| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
ScareCrow432/PPO-LunarLander-v2
|
ScareCrow432
| 2023-01-31T11:09:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-31T05:56:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.49 +/- 21.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
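Until the author fills in the snippet above, a minimal loading sketch might look like the following; the checkpoint filename `ppo-LunarLander-v2.zip` is an assumption (this card does not list the file), so check the repository file list before running.
```python
import gymnasium as gym  # requires the Box2D extra for LunarLander
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; verify against the repository file list.
checkpoint = load_from_hub(
    repo_id="ScareCrow432/PPO-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```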
|
ashutoshmondal/pneumo_v3
|
ashutoshmondal
| 2023-01-31T10:50:06Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"autotrain",
"vision",
"dataset:ashutoshmondal/autotrain-data-pneumo",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-01-31T10:47:40Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- ashutoshmondal/autotrain-data-pneumo
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 1.9594067819084715
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 3177689678
- CO2 Emissions (in grams): 1.9594
## Validation Metrics
- Loss: 0.017
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
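A minimal inference sketch for this classifier with the `transformers` pipeline; the image path below is a placeholder, not a file shipped with this repository.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ashutoshmondal/pneumo_v3")
# "chest_xray.jpg" is a placeholder path; point it at your own image.
predictions = classifier("chest_xray.jpg")
print(predictions)  # list of {"label": ..., "score": ...} entries
```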
|
Elifr/clasificador-muchocine
|
Elifr
| 2023-01-31T10:41:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-31T10:39:56Z |
---
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4813
- Accuracy: 0.4439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3269 | 0.4155 |
| 1.4007 | 2.0 | 776 | 1.3847 | 0.4258 |
| 0.9989 | 3.0 | 1164 | 1.4813 | 0.4439 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
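The card does not include a usage snippet; a minimal sketch with the `transformers` pipeline (the review text is an invented example) could be:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Elifr/clasificador-muchocine")
# Invented Spanish movie review; the model returns one of the muchocine rating classes.
print(classifier("Una película entretenida, aunque el guion es algo flojo."))
```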
|
Thyral/Testing
|
Thyral
| 2023-01-31T10:38:09Z | 0 | 0 | null |
[
"code",
"text-classification",
"de",
"dataset:allenai/soda",
"region:us"
] |
text-classification
| 2023-01-31T10:30:31Z |
---
datasets:
- allenai/soda
language:
- de
metrics:
- bleu
pipeline_tag: text-classification
tags:
- code
---
|
laamaai/clasificador-muchocine
|
laamaai
| 2023-01-31T10:22:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-31T10:20:57Z |
---
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3877
- Accuracy: 0.4439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3596 | 0.3884 |
| 1.4301 | 2.0 | 776 | 1.2666 | 0.4323 |
| 1.0491 | 3.0 | 1164 | 1.3877 | 0.4439 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
raquelsmv/clasificador-muchocine
|
raquelsmv
| 2023-01-31T10:20:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-31T10:19:48Z |
---
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3788
- Accuracy: 0.4555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3559 | 0.3961 |
| 1.4414 | 2.0 | 776 | 1.3217 | 0.4258 |
| 1.1139 | 3.0 | 1164 | 1.3788 | 0.4555 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
liweiliu/Taxi-v3
|
liweiliu
| 2023-01-31T09:55:27Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-31T09:55:19Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="liweiliu/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
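A short follow-up sketch for rolling the agent out greedily; it assumes the pickled dict stores the Q-table under a `qtable` key and that the classic gym step API is in use, neither of which this card states explicitly.
```python
import numpy as np

# Greedy rollout under the assumptions noted above.
state = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # assumed key name
    state, reward, done, info = env.step(action)
```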
|
liweiliu/q-FrozenLake-v1-4x4-noSlippery
|
liweiliu
| 2023-01-31T09:52:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-31T09:51:57Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="liweiliu/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dogeplusplus/stable-sam
|
dogeplusplus
| 2023-01-31T09:22:22Z | 4 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"stable-diffusion",
"text-to-image",
"sam-the-cat",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-09T18:49:12Z |
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- sam-the-cat
widget:
- text: a photo of samruane cat
---
# sam

## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('dogeplusplus/stable-sam')
# StableDiffusionPipeline needs a text prompt; this one comes from the card's widget example
image = pipeline('a photo of samruane cat').images[0]
image
```
|
erniechiew/sd-class-butterflies-32
|
erniechiew
| 2023-01-31T09:08:19Z | 0 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-01-31T09:08:09Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('erniechiew/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
phoenixaiden33/en_pipeline
|
phoenixaiden33
| 2023-01-31T08:51:15Z | 0 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2023-01-31T08:50:49Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9952305246
- name: NER Recall
type: recall
value: 0.9984051037
- name: NER F Score
type: f_score
value: 0.9968152866
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.4,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme
<details>
<summary>View label scheme (9 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `AGENT`, `ASSET`, `ASSET STATE`, `DATE`, `DETERMINAND`, `FLOW LEVEL`, `MEASUREMENT`, `OPERATION`, `PROCCESS` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 99.68 |
| `ENTS_P` | 99.52 |
| `ENTS_R` | 99.84 |
| `TOK2VEC_LOSS` | 21054.32 |
| `NER_LOSS` | 27455.52 |
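No usage example is included in the card; a minimal sketch, assuming the `en_pipeline` package has already been installed in the local environment, would be:
```python
import spacy

# Assumes the packaged pipeline is installed locally under the name "en_pipeline".
nlp = spacy.load("en_pipeline")
doc = nlp("Replace this with a sentence mentioning an asset, a determinand and a measurement.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # labels follow the scheme listed above (AGENT, ASSET, ...)
```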
|
amrisaurus/pretrained-m-bert-300
|
amrisaurus
| 2023-01-31T08:38:28Z | 1 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | null | 2023-01-31T08:37:56Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: pretrained-m-bert-300
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pretrained-m-bert-300
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.8273
- Validation Loss: 15.6623
- Epoch: 299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2479 | 10.9372 | 0 |
| 7.7731 | 10.9191 | 1 |
| 6.8702 | 11.5201 | 2 |
| 6.4849 | 11.6086 | 3 |
| 6.3725 | 11.5271 | 4 |
| 6.3243 | 12.1350 | 5 |
| 6.4515 | 11.7665 | 6 |
| 6.0675 | 12.1761 | 7 |
| 5.9322 | 12.1155 | 8 |
| 6.0672 | 12.0390 | 9 |
| 5.9976 | 12.5114 | 10 |
| 5.9208 | 12.7953 | 11 |
| 5.9503 | 12.4924 | 12 |
| 5.9696 | 12.7799 | 13 |
| 6.0537 | 12.3489 | 14 |
| 5.8556 | 12.5165 | 15 |
| 5.8976 | 12.8338 | 16 |
| 5.9458 | 13.0800 | 17 |
| 5.8258 | 12.9819 | 18 |
| 5.8284 | 13.0523 | 19 |
| 5.8739 | 13.0829 | 20 |
| 5.7537 | 13.1990 | 21 |
| 5.8624 | 13.2222 | 22 |
| 5.8871 | 13.1393 | 23 |
| 5.7382 | 13.0271 | 24 |
| 5.6791 | 13.3209 | 25 |
| 5.8651 | 13.5971 | 26 |
| 5.7795 | 14.0682 | 27 |
| 5.7961 | 13.5632 | 28 |
| 5.9525 | 13.0326 | 29 |
| 5.8251 | 13.0935 | 30 |
| 5.7616 | 13.5397 | 31 |
| 5.9793 | 13.4677 | 32 |
| 5.6852 | 13.6610 | 33 |
| 5.7826 | 13.6501 | 34 |
| 5.7675 | 13.3981 | 35 |
| 5.7075 | 13.6568 | 36 |
| 5.8363 | 13.5032 | 37 |
| 5.8045 | 13.6162 | 38 |
| 5.8582 | 13.5919 | 39 |
| 5.6427 | 13.8740 | 40 |
| 5.7807 | 13.7311 | 41 |
| 5.7421 | 14.1702 | 42 |
| 5.7074 | 13.8185 | 43 |
| 5.7145 | 14.0385 | 44 |
| 5.6605 | 14.0947 | 45 |
| 5.6647 | 13.9634 | 46 |
| 5.6628 | 14.1416 | 47 |
| 5.6652 | 13.9625 | 48 |
| 5.8173 | 14.0109 | 49 |
| 5.8535 | 14.0783 | 50 |
| 5.6777 | 14.4908 | 51 |
| 5.7189 | 14.2846 | 52 |
| 5.7306 | 13.9430 | 53 |
| 5.9265 | 14.2692 | 54 |
| 5.6752 | 13.7434 | 55 |
| 5.8745 | 14.2234 | 56 |
| 5.7229 | 14.4659 | 57 |
| 5.7215 | 14.0766 | 58 |
| 5.7540 | 14.3406 | 59 |
| 5.7831 | 13.9421 | 60 |
| 5.6559 | 14.0940 | 61 |
| 5.6964 | 14.4394 | 62 |
| 5.6707 | 14.4002 | 63 |
| 5.7088 | 14.3143 | 64 |
| 5.7738 | 14.3808 | 65 |
| 5.7194 | 14.6182 | 66 |
| 5.7911 | 14.2589 | 67 |
| 5.9282 | 14.3536 | 68 |
| 5.8769 | 14.5976 | 69 |
| 5.7150 | 14.3358 | 70 |
| 5.6573 | 14.2675 | 71 |
| 5.8684 | 14.2212 | 72 |
| 5.6871 | 14.0757 | 73 |
| 5.7349 | 14.9877 | 74 |
| 5.8587 | 14.1604 | 75 |
| 5.8195 | 14.4759 | 76 |
| 5.7681 | 14.4587 | 77 |
| 5.7803 | 14.4228 | 78 |
| 5.6986 | 14.1285 | 79 |
| 5.7369 | 14.5417 | 80 |
| 5.7565 | 14.2100 | 81 |
| 5.7648 | 14.4228 | 82 |
| 5.6307 | 15.0572 | 83 |
| 5.8166 | 14.6594 | 84 |
| 5.7945 | 14.9603 | 85 |
| 5.8273 | 14.6196 | 86 |
| 5.6483 | 15.2973 | 87 |
| 5.7982 | 14.9318 | 88 |
| 5.7286 | 14.4151 | 89 |
| 5.7488 | 14.2480 | 90 |
| 5.7564 | 15.2868 | 91 |
| 5.7200 | 14.9984 | 92 |
| 5.6758 | 14.8934 | 93 |
| 5.8600 | 14.6392 | 94 |
| 5.6302 | 14.9115 | 95 |
| 5.7530 | 14.8292 | 96 |
| 5.6311 | 14.9683 | 97 |
| 5.6845 | 14.8707 | 98 |
| 5.7639 | 15.2866 | 99 |
| 5.7692 | 15.1005 | 100 |
| 5.7279 | 15.5260 | 101 |
| 5.8349 | 14.8966 | 102 |
| 5.7720 | 14.2529 | 103 |
| 5.6082 | 15.5972 | 104 |
| 5.7725 | 15.1931 | 105 |
| 5.8239 | 15.1119 | 106 |
| 5.7973 | 14.8203 | 107 |
| 5.7439 | 15.2762 | 108 |
| 5.7344 | 15.2897 | 109 |
| 5.8002 | 14.8071 | 110 |
| 5.7978 | 15.3206 | 111 |
| 5.8302 | 15.1250 | 112 |
| 5.6829 | 15.3822 | 113 |
| 5.8658 | 14.7853 | 114 |
| 5.7236 | 15.1413 | 115 |
| 5.8151 | 14.9191 | 116 |
| 5.6697 | 15.2308 | 117 |
| 5.8450 | 15.2055 | 118 |
| 5.6843 | 15.3117 | 119 |
| 5.7215 | 15.1254 | 120 |
| 5.8230 | 15.1992 | 121 |
| 5.7106 | 15.2795 | 122 |
| 5.7720 | 15.6248 | 123 |
| 5.7214 | 15.0411 | 124 |
| 5.6302 | 15.2897 | 125 |
| 5.7151 | 15.7383 | 126 |
| 5.7107 | 15.5989 | 127 |
| 5.6569 | 15.2202 | 128 |
| 5.9129 | 15.1588 | 129 |
| 5.5289 | 15.4879 | 130 |
| 5.7570 | 15.5103 | 131 |
| 5.8748 | 15.3842 | 132 |
| 5.7679 | 15.6996 | 133 |
| 5.6655 | 15.2690 | 134 |
| 5.7573 | 15.2401 | 135 |
| 5.7238 | 15.5996 | 136 |
| 5.7273 | 15.3198 | 137 |
| 5.7344 | 15.3389 | 138 |
| 5.8311 | 14.8744 | 139 |
| 5.6549 | 15.6956 | 140 |
| 5.6496 | 15.2694 | 141 |
| 5.7590 | 15.0076 | 142 |
| 5.7703 | 15.3850 | 143 |
| 5.7206 | 15.4296 | 144 |
| 5.8623 | 14.8546 | 145 |
| 5.7601 | 15.4164 | 146 |
| 5.7175 | 15.8795 | 147 |
| 5.6459 | 15.8282 | 148 |
| 5.8591 | 15.3127 | 149 |
| 5.7940 | 16.0000 | 150 |
| 5.8439 | 15.5051 | 151 |
| 5.7669 | 15.9199 | 152 |
| 5.6481 | 15.2306 | 153 |
| 5.7793 | 15.4377 | 154 |
| 5.8167 | 15.7849 | 155 |
| 5.7556 | 15.2991 | 156 |
| 5.7905 | 15.5514 | 157 |
| 5.5980 | 15.6595 | 158 |
| 5.7624 | 15.7794 | 159 |
| 5.7073 | 15.7131 | 160 |
| 5.7823 | 15.6013 | 161 |
| 5.6993 | 15.3206 | 162 |
| 5.8054 | 15.1585 | 163 |
| 5.7734 | 15.3361 | 164 |
| 5.6832 | 16.0706 | 165 |
| 5.6192 | 15.7624 | 166 |
| 5.8735 | 15.9157 | 167 |
| 5.7212 | 15.5399 | 168 |
| 5.7479 | 15.7155 | 169 |
| 5.6542 | 16.2107 | 170 |
| 5.7076 | 15.7150 | 171 |
| 5.7149 | 15.8730 | 172 |
| 5.8877 | 15.2373 | 173 |
| 5.6803 | 16.1623 | 174 |
| 5.7420 | 15.9171 | 175 |
| 5.6912 | 15.5799 | 176 |
| 5.7350 | 16.0120 | 177 |
| 5.6631 | 15.9157 | 178 |
| 5.7305 | 16.1250 | 179 |
| 5.7077 | 15.8018 | 180 |
| 5.6688 | 16.1011 | 181 |
| 5.7675 | 15.6628 | 182 |
| 5.6747 | 15.6886 | 183 |
| 5.7921 | 15.6053 | 184 |
| 5.6793 | 15.5329 | 185 |
| 5.6993 | 15.4673 | 186 |
| 5.8451 | 15.6634 | 187 |
| 5.7389 | 15.9733 | 188 |
| 5.7486 | 15.8548 | 189 |
| 5.7089 | 16.1267 | 190 |
| 5.8106 | 15.4471 | 191 |
| 5.7402 | 15.8568 | 192 |
| 5.6393 | 15.9586 | 193 |
| 5.7403 | 15.2678 | 194 |
| 5.7854 | 15.5638 | 195 |
| 5.5414 | 16.1871 | 196 |
| 5.7082 | 15.9706 | 197 |
| 5.6636 | 16.2550 | 198 |
| 5.6875 | 15.9385 | 199 |
| 5.7139 | 15.6730 | 200 |
| 5.6601 | 15.4174 | 201 |
| 5.6422 | 16.1655 | 202 |
| 5.7642 | 16.3103 | 203 |
| 5.7039 | 16.4020 | 204 |
| 5.7237 | 15.8775 | 205 |
| 5.7529 | 15.7237 | 206 |
| 5.6827 | 16.1514 | 207 |
| 5.7591 | 16.0905 | 208 |
| 5.7899 | 15.6417 | 209 |
| 5.7775 | 16.3878 | 210 |
| 5.6634 | 15.9944 | 211 |
| 5.5958 | 16.1042 | 212 |
| 5.8629 | 16.6206 | 213 |
| 5.7548 | 16.3826 | 214 |
| 5.7512 | 16.2234 | 215 |
| 5.6905 | 16.5029 | 216 |
| 5.6434 | 16.8345 | 217 |
| 5.6728 | 15.8749 | 218 |
| 5.7253 | 16.1679 | 219 |
| 5.6529 | 15.9138 | 220 |
| 5.6542 | 16.4299 | 221 |
| 5.6646 | 15.9442 | 222 |
| 5.7054 | 16.3624 | 223 |
| 5.7083 | 16.1256 | 224 |
| 5.8134 | 15.8207 | 225 |
| 5.7805 | 16.2750 | 226 |
| 5.7037 | 15.9758 | 227 |
| 5.7653 | 16.2336 | 228 |
| 5.7890 | 16.4635 | 229 |
| 5.7060 | 16.2425 | 230 |
| 5.7508 | 16.2569 | 231 |
| 5.6349 | 16.4228 | 232 |
| 5.7062 | 16.5237 | 233 |
| 5.7277 | 16.4191 | 234 |
| 5.7827 | 16.0735 | 235 |
| 5.7090 | 16.3830 | 236 |
| 5.6960 | 16.3506 | 237 |
| 5.7367 | 15.9862 | 238 |
| 5.7863 | 16.2742 | 239 |
| 5.5916 | 16.3640 | 240 |
| 5.6753 | 16.7890 | 241 |
| 5.6915 | 16.5041 | 242 |
| 5.7292 | 16.4998 | 243 |
| 5.7814 | 16.1040 | 244 |
| 5.6399 | 16.4167 | 245 |
| 5.6281 | 16.1772 | 246 |
| 5.7067 | 16.5245 | 247 |
| 5.7268 | 16.3465 | 248 |
| 5.7664 | 16.5136 | 249 |
| 5.7020 | 16.1559 | 250 |
| 5.6693 | 16.8744 | 251 |
| 5.6625 | 15.9549 | 252 |
| 5.6282 | 16.4120 | 253 |
| 5.6190 | 15.9476 | 254 |
| 5.6562 | 16.2114 | 255 |
| 5.6690 | 16.2859 | 256 |
| 5.7533 | 16.3209 | 257 |
| 5.7191 | 16.3224 | 258 |
| 5.8181 | 16.1149 | 259 |
| 5.6598 | 16.2559 | 260 |
| 5.6762 | 16.5949 | 261 |
| 5.6452 | 16.2653 | 262 |
| 5.6691 | 16.2993 | 263 |
| 5.7951 | 16.0316 | 264 |
| 5.8137 | 16.3896 | 265 |
| 5.7124 | 16.3996 | 266 |
| 5.7853 | 16.6237 | 267 |
| 5.7931 | 15.6052 | 268 |
| 5.7788 | 16.5983 | 269 |
| 5.7472 | 16.0878 | 270 |
| 5.6607 | 16.6207 | 271 |
| 5.8085 | 16.5659 | 272 |
| 5.7699 | 16.1165 | 273 |
| 5.6865 | 16.3090 | 274 |
| 5.7237 | 16.1727 | 275 |
| 5.8241 | 16.1545 | 276 |
| 5.6519 | 16.5434 | 277 |
| 5.6718 | 16.4884 | 278 |
| 5.6988 | 16.4953 | 279 |
| 5.7020 | 16.8616 | 280 |
| 5.7338 | 16.3847 | 281 |
| 5.6695 | 16.4040 | 282 |
| 5.6916 | 16.3199 | 283 |
| 5.7519 | 15.6585 | 284 |
| 5.7317 | 16.4947 | 285 |
| 5.8143 | 15.9633 | 286 |
| 5.6979 | 16.5859 | 287 |
| 5.7405 | 16.5161 | 288 |
| 5.7338 | 16.4144 | 289 |
| 5.5844 | 16.5315 | 290 |
| 5.6871 | 16.4282 | 291 |
| 5.8713 | 15.5593 | 292 |
| 5.6710 | 15.8436 | 293 |
| 5.7074 | 16.4072 | 294 |
| 5.6212 | 16.4969 | 295 |
| 5.7022 | 16.3911 | 296 |
| 5.6552 | 16.8670 | 297 |
| 5.7888 | 16.2774 | 298 |
| 5.8273 | 15.6623 | 299 |
### Framework versions
- Transformers 4.27.0.dev0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nijatzeynalov/mT5-based-azerbaijani-summarize
|
nijatzeynalov
| 2023-01-31T08:27:53Z | 27 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"az",
"dataset:nijatzeynalov/azerbaijani-multi-news",
"arxiv:1910.10683",
"arxiv:2010.11934",
"doi:10.57967/hf/0316",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-01-30T21:08:22Z |
---
license: creativeml-openrail-m
widget:
- text: >-
Ötən il Azərbaycana 74 577 avtomobil idxal edilib. Bu da 2021-ci illə
müqayisədə 16 617 ədəd və ya 18,2% azdır.
Xezerxeber.az-ın məlumatına görə, avtomobil bazarı üzrə qiymətləndirici
Sərxan Qədirov deyib ki, əvvəl ay ərzində 5-10 avtomobil gətirən şəxslər
hazırda bu sayı 2-3 ədədə endiriblər. Hətta ölkəyə nəqliyyat vasitələrinin
gətirilməsi işini dayandıranlar da var.
Nəqliyyat məsələləri üzrə ekspert Eldəniz Cəfərov isə bildirib ki,
gözləniləndən fərqli olaraq, ölkəyə idxal olunan kiçik mühərrikli
avtomobillərin sayında da azalma var. Bunun başlıca səbəbi Rusiyada
istehsalın dayandırılmasıdır.
Ekspertin sözlərinə görə, əvvəllər Azərbaycan bazarında Rusiya istehsalı
olan nəqliyyat vasitələri geniş yer tuturdu. Hazırda isə həmin ölkədən idxal
tam dayanıb.
datasets:
- nijatzeynalov/azerbaijani-multi-news
language:
- az
metrics:
- rouge
pipeline_tag: summarization
---
# mT5-small based Azerbaijani Summarization
In this model, [Google's Multilingual T5-small](https://github.com/google-research/multilingual-t5) is fine-tuned on the [Azerbaijani News Summary Dataset](https://huggingface.co/datasets/nijatzeynalov/azerbaijani-multi-news) for the **summarization** downstream task. The model was trained with 3 epochs, a batch size of 64, and a 10e-4 learning rate. Training took almost 12 hours on a GPU instance with an Ubuntu Server 20.04 LTS image in Microsoft Azure. The maximum news length is 2048 tokens and the maximum summary length is 128 tokens.
mT5 is a multilingual variant of __T5__ that was pre-trained only on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual),
without any supervised training. Therefore, the mT5 model has to be fine-tuned before it is usable on a downstream task.
### Text-to-Text Transfer Transformer (T5)
The paper [“Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer”](https://arxiv.org/pdf/1910.10683.pdf) presents a large-scale empirical survey to determine which transfer learning techniques work best and apply these insights at scale to create a new model called the Text-To-Text Transfer Transformer.

T5, or Text-to-Text Transfer Transformer, is a Transformer-based architecture that uses a text-to-text approach. Every task – including translation, question answering, and classification – is cast as feeding the model text as input and training it to generate some target text. This allows the same model, loss function, and hyperparameters to be used across a diverse set of tasks.
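As an illustration of this text-to-text framing, the task prefixes from the T5 paper turn every task into plain input/target strings:
```python
# (input text, target text) pairs in T5's text-to-text format, using prefixes from the T5 paper
examples = [
    ("translate English to German: That is good.", "Das ist gut."),
    ("cola sentence: The course is jumping well.", "not acceptable"),
    ("summarize: <full news article>", "<short summary>"),
]
```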
The changes compared to BERT include:
- adding a causal decoder to the bidirectional architecture.
- replacing the fill-in-the-blank cloze task with a mix of alternative pre-training tasks.
The model was trained on a cleaned version of Common Crawl that is two orders of magnitude larger than Wikipedia.
The T5 model, pre-trained on C4, achieves state-of-the-art results on many NLP benchmarks while being flexible enough to be fine-tuned for several downstream tasks. The pre-trained T5 available on Hugging Face is also trained on a mixture of unsupervised training (reconstructing masked spans of text) and task-specific supervised training.
### Multilingual t5
["mt5"](https://arxiv.org/pdf/2010.11934v3.pdf) is a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering
101 languages.
mT5 is pre-trained only in an unsupervised manner on multiple languages, and it is not trained for any specific downstream task. In other words, this pre-trained model can produce fluent text in Azerbaijani, but it has no ability for specific tasks such as summarization, correction, or machine translation.
Several sizes of mT5 models are available on Hugging Face, and here I used the small one (google/mt5-small). I therefore fine-tuned this model for summarization in Azerbaijani using the [Azerbaijani News Summary Dataset](https://huggingface.co/datasets/nijatzeynalov/azerbaijani-multi-news).
## Training hyperparameters
__mT5-based-azerbaijani-summarize__ model training took almost 12 hours on a GPU instance with an Ubuntu Server 20.04 LTS image in Microsoft Azure. The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 90
- num_epochs: 10
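A sketch of how these values map onto `transformers`' `Seq2SeqTrainingArguments`; the output directory name is arbitrary and this is not the author's original training script.
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-az-summarize",    # arbitrary name, not from the card
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,   # the card reports a total train batch size of 64
    warmup_steps=90,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    seed=42,
)
```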
## Dataset
Model was trained on [__az-news-summary__ dataset](https://huggingface.co/datasets/nijatzeynalov/azerbaijani-multi-news), a comprehensive and diverse dataset comprising 143k (143,448) Azerbaijani news articles extracted using a set of carefully designed heuristics.
The dataset covers common news topics, including war, government, politics, education, health, the environment, economy, business, fashion, entertainment, and sport, as well as quirky or unusual events.
This dataset has 3 splits: _train_, _validation_, and _test_. \
Token counts are white space based.
| Dataset Split | Number of Instances | Size (MB) |
| ------------- | --------------------|:----------------------|
| Train | 100,413 | 150 |
| Validation | 14,344 | 21.3 |
| Test | 28,691 | 42.8 |
## Training results with comparison
__mT5-based-azerbaijani-summarize__ model rouge scores on the test set:
- Rouge1: 39.4222
- Rouge2: 24.8624
- Rougel: 32.2487
For the __Azerbaijani text summarization downstream task__, mT5-multilingual-XLSum has also been developed, fine-tuned on the 45 languages of the [XL-Sum](https://huggingface.co/datasets/csebuetnlp/xlsum) dataset. For fine-tuning details and scripts,
see the [paper](https://aclanthology.org/2021.findings-acl.413/) and the [official repository](https://github.com/csebuetnlp/xl-sum).
__mT5_multilingual_XLSum__ model rouge scores on the XL-Sum test set (only for Azerbaijani):
- Rouge1: 21.4227
- Rouge2: 9.5214
- Rougel: 19.3331
As seen from the numbers, our model __mT5-based-azerbaijani-summarize__ achieves dramatically better performance than __mT5_multilingual_XLSum__.
## Using this model in transformers
```python
!pip install sentencepiece
!pip install transformers
```
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
article_text = """Ötən il Azərbaycana 74 577 avtomobil idxal edilib. Bu da 2021-ci illə müqayisədə 16 617 ədəd və ya 18,2% azdır.
Xezerxeber.az-ın məlumatına görə, avtomobil bazarı üzrə qiymətləndirici Sərxan Qədirov deyib ki, əvvəl ay ərzində 5-10 avtomobil gətirən şəxslər hazırda bu sayı 2-3 ədədə endiriblər. Hətta ölkəyə nəqliyyat vasitələrinin gətirilməsi işini dayandıranlar da var.
Nəqliyyat məsələləri üzrə ekspert Eldəniz Cəfərov isə bildirib ki, gözləniləndən fərqli olaraq, ölkəyə idxal olunan kiçik mühərrikli avtomobillərin sayında da azalma var. Bunun başlıca səbəbi Rusiyada istehsalın dayandırılmasıdır.
Ekspertin sözlərinə görə, əvvəllər Azərbaycan bazarında Rusiya istehsalı olan nəqliyyat vasitələri geniş yer tuturdu. Hazırda isə həmin ölkədən idxal tam dayanıb."""
model_name = "nijatzeynalov/mT5-based-azerbaijani-summarize"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```
```python
input_ids = tokenizer(
article_text,
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=2048
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=128,
no_repeat_ngram_size=2,
num_beams=4
)[0]
summary = tokenizer.decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(summary)
```
Result:
```python
Azərbaycana idxal olunan avtomobillərin sayı açıqlanıb
```
## Citation
If you use this model, please cite:
```
@misc {nijatzeynalov_2023,
author = { {NijatZeynalov} },
title = { mT5-based-azerbaijani-summarize (Revision 19930ab) },
year = 2023,
url = { https://huggingface.co/nijatzeynalov/mT5-based-azerbaijani-summarize },
doi = { 10.57967/hf/0316 },
publisher = { Hugging Face }
}
```
|
kakaobrain/karlo-v1-alpha-image-variations
|
kakaobrain
| 2023-01-31T08:27:48Z | 292 | 7 |
diffusers
|
[
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"diffusers:UnCLIPImageVariationPipeline",
"region:us"
] |
text-to-image
| 2023-01-30T19:46:46Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
# Karlo v1 alpha
Karlo is a text-conditional image generation model based on OpenAI's unCLIP architecture, with an improved super-resolution module that upscales from 64px to 256px while recovering high-frequency details in only a small number of denoising steps.
* [Original codebase](https://github.com/kakaobrain/karlo)
## Usage
Karlo is available in diffusers!
```bash
pip install diffusers transformers accelerate safetensors
```
### Text to image
```python
from diffusers import UnCLIPPipeline
import torch
pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe = pipe.to('cuda')
prompt = "a high-resolution photograph of a big red frog on a green leaf."
image = pipe([prompt]).images[0]
image.save("./frog.png")
```

### Image variation
```python
from diffusers import UnCLIPImageVariationPipeline
import torch
from PIL import Image
pipe = UnCLIPImageVariationPipeline.from_pretrained("kakaobrain/karlo-v1-alpha-image-variations", torch_dtype=torch.float16)
pipe = pipe.to('cuda')
image = Image.open("./frog.png")
image = pipe(image).images[0]
image.save("./frog-variation.png")
```

## Model Architecture
### Overview
Karlo is a text-conditional diffusion model based on unCLIP, composed of prior, decoder, and super-resolution modules. In this repository, we include the improved version of the standard super-resolution module, which upscales 64px to 256px in only 7 reverse steps, as illustrated in the figure below:
<p float="left">
<img src="https://raw.githubusercontent.com/kakaobrain/karlo/main/assets/improved_sr_arch.jpg"/>
</p>
Specifically, the standard SR module trained with the DDPM objective upscales 64px to 256px in the first 6 denoising steps using the respacing technique. Then, the additional SR module fine-tuned with a [VQ-GAN](https://compvis.github.io/taming-transformers/)-style loss performs the final reverse step to recover high-frequency details. We observe that this approach is very effective for upscaling low-resolution images in a small number of reverse steps.
### Details
We train all components from scratch on 115M image-text pairs, including COYO-100M, CC3M, and CC12M. For the prior and decoder, we use the ViT-L/14 provided by OpenAI's [CLIP repository](https://github.com/openai/CLIP). Unlike the original implementation of unCLIP, we replace the trainable transformer in the decoder with the text encoder of ViT-L/14 for efficiency. For the SR module, we first train the model using the DDPM objective for 1M steps, followed by an additional 234K steps to fine-tune the additional component. The table below summarizes the important statistics of our components:
| | Prior | Decoder | SR |
|:------|----:|----:|----:|
| CLIP | ViT-L/14 | ViT-L/14 | - |
| #param | 1B | 900M | 700M + 700M |
| #optimization steps | 1M | 1M | 1M + 0.2M |
| #sampling steps | 25 | 50 (default), 25 (fast) | 7 |
|Checkpoint links| [ViT-L-14](https://arena.kakaocdn.net/brainrepo/models/karlo-public/v1.0.0.alpha/096db1af569b284eb76b3881534822d9/ViT-L-14.pt), [ViT-L-14 stats](https://arena.kakaocdn.net/brainrepo/models/karlo-public/v1.0.0.alpha/0b62380a75e56f073e2844ab5199153d/ViT-L-14_stats.th), [model](https://arena.kakaocdn.net/brainrepo/models/karlo-public/v1.0.0.alpha/efdf6206d8ed593961593dc029a8affa/decoder-ckpt-step%3D01000000-of-01000000.ckpt) | [model](https://arena.kakaocdn.net/brainrepo/models/karlo-public/v1.0.0.alpha/85626483eaca9f581e2a78d31ff905ca/prior-ckpt-step%3D01000000-of-01000000.ckpt) | [model](https://arena.kakaocdn.net/brainrepo/models/karlo-public/v1.0.0.alpha/4226b831ae0279020d134281f3c31590/improved-sr-ckpt-step%3D1.2M.ckpt) |
In the checkpoint links, ViT-L-14 is equivalent to the original version, but we include it for convenience. We also remark that ViT-L-14-stats is required to normalize the outputs of the prior module.
### Evaluation
We quantitatively measure the performance of Karlo-v1.0.alpha on the validation splits of CC3M and MS-COCO. The table below presents CLIP-score and FID. To measure FID, we resize the shorter side of each image to 256px and then crop it at the center. We set the classifier-free guidance scales for the prior and decoder to 4 and 8, respectively, in all cases. We observe that our model achieves reasonable performance even with 25 decoder sampling steps.
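The resize-then-center-crop preprocessing described above corresponds to the following torchvision sketch (illustrative only; the exact evaluation code is not part of this card):
```python
from torchvision import transforms

# Resize the shorter side to 256 px, then take the 256x256 center crop, as described for FID.
fid_preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
])
```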
CC3M
| Sampling step | CLIP-s (ViT-B/16) | FID (13k from val)|
|:------|----:|----:|
| Prior (25) + Decoder (25) + SR (7) | 0.3081 | 14.37 |
| Prior (25) + Decoder (50) + SR (7) | 0.3086 | 13.95 |
MS-COCO
| Sampling step | CLIP-s (ViT-B/16) | FID (30k from val)|
|:------|----:|----:|
| Prior (25) + Decoder (25) + SR (7) | 0.3192 | 15.24 |
| Prior (25) + Decoder (50) + SR (7) | 0.3192 | 14.43 |
For more information, please refer to the upcoming technical report.
### Training Details
This alpha version of Karlo is trained on 115M image-text pairs,
including [COYO](https://github.com/kakaobrain/coyo-dataset)-100M high-quality subset, CC3M, and CC12M.
For those who are interested in a better version of Karlo trained on more large-scale high-quality datasets,
please visit the landing page of our application [B^DISCOVER](https://bdiscover.kakaobrain.com/).
## BibTex
If you find this repository useful in your research, please cite:
```
@misc{kakaobrain2022karlo-v1-alpha,
title = {Karlo-v1.0.alpha on COYO-100M and CC15M},
author = {Donghoon Lee, Jiseob Kim, Jisu Choi, Jongmin Kim, Minwoo Byeon, Woonhyuk Baek and Saehoon Kim},
year = {2022},
howpublished = {\url{https://github.com/kakaobrain/karlo}},
}
```
|
nolanaatama/opwslora
|
nolanaatama
| 2023-01-31T08:25:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-31T08:19:02Z |
---
license: creativeml-openrail-m
---
|
amrisaurus/pretrained-m-bert-200
|
amrisaurus
| 2023-01-31T08:05:40Z | 1 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | null | 2023-01-31T08:05:08Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: pretrained-m-bert-200
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pretrained-m-bert-200
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.6892
- Validation Loss: 15.9999
- Epoch: 199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2629 | 10.9400 | 0 |
| 7.8719 | 10.8986 | 1 |
| 6.8337 | 11.4901 | 2 |
| 6.4663 | 11.6037 | 3 |
| 6.4171 | 11.5051 | 4 |
| 6.3166 | 12.1207 | 5 |
| 6.4304 | 11.7927 | 6 |
| 6.0435 | 12.1347 | 7 |
| 5.9134 | 12.1229 | 8 |
| 6.0124 | 12.0225 | 9 |
| 5.9096 | 12.4855 | 10 |
| 5.8829 | 12.7256 | 11 |
| 5.8533 | 12.3504 | 12 |
| 5.8075 | 12.7843 | 13 |
| 6.0418 | 12.6493 | 14 |
| 5.8611 | 12.4900 | 15 |
| 5.8863 | 12.7790 | 16 |
| 5.9484 | 13.0246 | 17 |
| 5.8226 | 12.9865 | 18 |
| 5.8262 | 13.1064 | 19 |
| 5.8687 | 13.1811 | 20 |
| 5.7531 | 13.2824 | 21 |
| 5.8473 | 13.2894 | 22 |
| 5.8762 | 13.1719 | 23 |
| 5.7386 | 13.0748 | 24 |
| 5.6647 | 13.3089 | 25 |
| 5.8553 | 13.5698 | 26 |
| 5.7698 | 14.1035 | 27 |
| 5.7972 | 13.6096 | 28 |
| 5.9381 | 13.1142 | 29 |
| 5.8173 | 13.1007 | 30 |
| 5.7676 | 13.6502 | 31 |
| 5.9740 | 13.5317 | 32 |
| 5.6842 | 13.7206 | 33 |
| 5.7764 | 13.5819 | 34 |
| 5.7659 | 13.4004 | 35 |
| 5.7104 | 13.6715 | 36 |
| 5.8345 | 13.5589 | 37 |
| 5.8067 | 13.6957 | 38 |
| 5.8537 | 13.6661 | 39 |
| 5.6418 | 13.8966 | 40 |
| 5.7818 | 13.7630 | 41 |
| 5.7406 | 14.1682 | 42 |
| 5.7053 | 13.8797 | 43 |
| 5.7151 | 14.1307 | 44 |
| 5.6621 | 14.1855 | 45 |
| 5.6716 | 14.1013 | 46 |
| 5.6596 | 14.2236 | 47 |
| 5.6680 | 14.0390 | 48 |
| 5.8122 | 14.0500 | 49 |
| 5.8497 | 14.0991 | 50 |
| 5.6758 | 14.5258 | 51 |
| 5.7158 | 14.2373 | 52 |
| 5.7288 | 13.9851 | 53 |
| 5.9239 | 14.2297 | 54 |
| 5.6722 | 13.6866 | 55 |
| 5.8708 | 14.2755 | 56 |
| 5.7190 | 14.4764 | 57 |
| 5.7218 | 14.1861 | 58 |
| 5.7478 | 14.3363 | 59 |
| 5.7843 | 13.9645 | 60 |
| 5.6555 | 14.1351 | 61 |
| 5.6951 | 14.5155 | 62 |
| 5.6711 | 14.4671 | 63 |
| 5.7068 | 14.4064 | 64 |
| 5.7773 | 14.5143 | 65 |
| 5.7188 | 14.6878 | 66 |
| 5.7912 | 14.3496 | 67 |
| 5.9308 | 14.4187 | 68 |
| 5.8765 | 14.6648 | 69 |
| 5.7103 | 14.3686 | 70 |
| 5.6585 | 14.3171 | 71 |
| 5.8697 | 14.2778 | 72 |
| 5.6874 | 14.1511 | 73 |
| 5.7367 | 15.0222 | 74 |
| 5.8603 | 14.2226 | 75 |
| 5.8183 | 14.6257 | 76 |
| 5.7646 | 14.5472 | 77 |
| 5.7813 | 14.4560 | 78 |
| 5.6991 | 14.1486 | 79 |
| 5.7365 | 14.5998 | 80 |
| 5.7602 | 14.3595 | 81 |
| 5.7646 | 14.4916 | 82 |
| 5.6289 | 15.1076 | 83 |
| 5.8171 | 14.7216 | 84 |
| 5.7939 | 14.9316 | 85 |
| 5.8249 | 14.6632 | 86 |
| 5.6479 | 15.2074 | 87 |
| 5.7985 | 14.9238 | 88 |
| 5.7332 | 14.4504 | 89 |
| 5.7495 | 14.2924 | 90 |
| 5.7579 | 15.3362 | 91 |
| 5.7217 | 15.0819 | 92 |
| 5.6750 | 14.9618 | 93 |
| 5.8607 | 14.6850 | 94 |
| 5.6310 | 14.9199 | 95 |
| 5.7532 | 14.8353 | 96 |
| 5.6318 | 14.9707 | 97 |
| 5.6861 | 14.8903 | 98 |
| 5.7634 | 15.3237 | 99 |
| 5.7703 | 15.0675 | 100 |
| 5.7290 | 15.5422 | 101 |
| 5.8383 | 14.9575 | 102 |
| 5.7694 | 14.2810 | 103 |
| 5.6092 | 15.5547 | 104 |
| 5.7699 | 15.2309 | 105 |
| 5.8225 | 15.0764 | 106 |
| 5.8007 | 14.8694 | 107 |
| 5.7435 | 15.2683 | 108 |
| 5.7358 | 15.3533 | 109 |
| 5.8024 | 14.8301 | 110 |
| 5.8027 | 15.3505 | 111 |
| 5.8282 | 15.1353 | 112 |
| 5.6818 | 15.3525 | 113 |
| 5.8653 | 14.7720 | 114 |
| 5.7234 | 15.2079 | 115 |
| 5.8179 | 14.9355 | 116 |
| 5.6718 | 15.2269 | 117 |
| 5.8428 | 15.1447 | 118 |
| 5.6875 | 15.2709 | 119 |
| 5.7212 | 15.1541 | 120 |
| 5.8223 | 15.2145 | 121 |
| 5.7125 | 15.2783 | 122 |
| 5.7707 | 15.6087 | 123 |
| 5.7251 | 15.1095 | 124 |
| 5.6308 | 15.2443 | 125 |
| 5.7163 | 15.7562 | 126 |
| 5.7097 | 15.5930 | 127 |
| 5.6560 | 15.1742 | 128 |
| 5.9121 | 15.0983 | 129 |
| 5.5284 | 15.4298 | 130 |
| 5.7584 | 15.5905 | 131 |
| 5.8737 | 15.3326 | 132 |
| 5.7731 | 15.6967 | 133 |
| 5.6686 | 15.2850 | 134 |
| 5.7585 | 15.2779 | 135 |
| 5.7239 | 15.6021 | 136 |
| 5.7295 | 15.3237 | 137 |
| 5.7358 | 15.3199 | 138 |
| 5.8334 | 14.8834 | 139 |
| 5.6537 | 15.6226 | 140 |
| 5.6501 | 15.2466 | 141 |
| 5.7591 | 14.9815 | 142 |
| 5.7694 | 15.3828 | 143 |
| 5.7239 | 15.4082 | 144 |
| 5.8641 | 14.8029 | 145 |
| 5.7668 | 15.4207 | 146 |
| 5.7180 | 15.8702 | 147 |
| 5.6461 | 15.7631 | 148 |
| 5.8629 | 15.2891 | 149 |
| 5.7973 | 15.9778 | 150 |
| 5.8458 | 15.4747 | 151 |
| 5.7720 | 15.9476 | 152 |
| 5.6491 | 15.2055 | 153 |
| 5.7801 | 15.3822 | 154 |
| 5.8175 | 15.7697 | 155 |
| 5.7536 | 15.2464 | 156 |
| 5.7925 | 15.4849 | 157 |
| 5.6012 | 15.5773 | 158 |
| 5.7623 | 15.7559 | 159 |
| 5.7078 | 15.7061 | 160 |
| 5.7834 | 15.5417 | 161 |
| 5.7058 | 15.3236 | 162 |
| 5.8079 | 15.1048 | 163 |
| 5.7757 | 15.2895 | 164 |
| 5.6822 | 15.9946 | 165 |
| 5.6205 | 15.8053 | 166 |
| 5.8778 | 15.9524 | 167 |
| 5.7211 | 15.5006 | 168 |
| 5.7499 | 15.7000 | 169 |
| 5.6561 | 16.1970 | 170 |
| 5.7077 | 15.7324 | 171 |
| 5.7177 | 15.8832 | 172 |
| 5.8901 | 15.2579 | 173 |
| 5.6842 | 16.1185 | 174 |
| 5.7424 | 15.8840 | 175 |
| 5.6889 | 15.5184 | 176 |
| 5.7339 | 15.9269 | 177 |
| 5.6635 | 15.8283 | 178 |
| 5.7331 | 16.0767 | 179 |
| 5.7096 | 15.7523 | 180 |
| 5.6715 | 16.0680 | 181 |
| 5.7703 | 15.6030 | 182 |
| 5.6772 | 15.6442 | 183 |
| 5.7933 | 15.6118 | 184 |
| 5.6788 | 15.5001 | 185 |
| 5.6985 | 15.4559 | 186 |
| 5.8450 | 15.5850 | 187 |
| 5.7437 | 15.9233 | 188 |
| 5.7502 | 15.8410 | 189 |
| 5.7081 | 16.0491 | 190 |
| 5.8119 | 15.3163 | 191 |
| 5.7426 | 15.7990 | 192 |
| 5.6422 | 15.9709 | 193 |
| 5.7431 | 15.3411 | 194 |
| 5.7894 | 15.5860 | 195 |
| 5.5432 | 16.2503 | 196 |
| 5.7073 | 16.0347 | 197 |
| 5.6637 | 16.2954 | 198 |
| 5.6892 | 15.9999 | 199 |
### Framework versions
- Transformers 4.27.0.dev0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
duongkstn/a2c-AntBulletEnv-v0
|
duongkstn
| 2023-01-31T07:34:09Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-31T07:32:59Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 2039.26 +/- 43.90
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
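As with the other Stable-Baselines3 cards above, a minimal loading sketch until the snippet is filled in; the checkpoint filename is an assumption, not taken from this repository.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed filename; verify against the repository file list.
checkpoint = load_from_hub(
    repo_id="duongkstn/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
# Rolling the agent out additionally requires the PyBullet envs (AntBulletEnv-v0), not shown here.
```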
|
mojoee/Reinforce-pixelcopter
|
mojoee
| 2023-01-31T06:52:28Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-31T03:33:29Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 43.30 +/- 31.43
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
MrDivakaruni/ppo-SnowballTarget
|
MrDivakaruni
| 2023-01-31T06:37:58Z | 13 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-01-31T06:37:53Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: MrDivakaruni/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ksoky/whisper-large-khmer-asr
|
ksoky
| 2023-01-31T06:37:35Z | 93 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"km",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-17T16:50:53Z |
---
language:
- km
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- openslr
metrics:
- wer
model-index:
- name: Whisper Large Khmer - Kak Soky
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: SLR42
type: openslr
args: 'config: km, split: test'
metrics:
- name: Wer
type: wer
value: 29.51830443159923
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Khmer - Kak Soky
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the SLR42 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2375
- Wer: 29.5183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0102 | 12.34 | 1000 | 0.2228 | 38.2659 |
| 0.0003 | 24.69 | 2000 | 0.2260 | 30.7900 |
| 0.0001 | 37.04 | 3000 | 0.2310 | 30.0578 |
| 0.0 | 49.38 | 4000 | 0.2375 | 29.5183 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
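A minimal inference sketch with the `transformers` ASR pipeline; the audio filename is a placeholder.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ksoky/whisper-large-khmer-asr")
# Placeholder path; use any audio file containing Khmer speech.
print(asr("sample_khmer.wav")["text"])
```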
|
HuyenNguyen/Vigec-V6
|
HuyenNguyen
| 2023-01-31T06:21:58Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-31T01:52:50Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Vigec-V6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Vigec-V6
This model is a fine-tuned version of [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1176
- eval_bleu: 90.2995
- eval_gen_len: 9.904
- eval_runtime: 72.4913
- eval_samples_per_second: 27.59
- eval_steps_per_second: 3.449
- epoch: 0.97
- step: 40000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 100000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ykurilov/realistic_vision_diff
|
ykurilov
| 2023-01-31T05:38:37Z | 2 | 1 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-30T13:51:28Z |
---
license: creativeml-openrail-m
---
|
akatak/distilbert-base-uncased-finetuned-emotion
|
akatak
| 2023-01-31T05:23:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-31T04:09:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
- name: F1
type: f1
value: 0.929584942435213
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2141
- Accuracy: 0.9295
- F1: 0.9296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.823 | 1.0 | 250 | 0.3048 | 0.905 | 0.9024 |
| 0.2448 | 2.0 | 500 | 0.2141 | 0.9295 | 0.9296 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
astein0/q-Taxi-v1
|
astein0
| 2023-01-31T05:19:47Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T23:56:58Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="astein0/q-Taxi-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sadaira/ppo-LunarLander-v2
|
sadaira
| 2023-01-31T05:15:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-31T05:14:53Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.66 +/- 19.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
dhmeltzer/Reinforce-MLP_2
|
dhmeltzer
| 2023-01-31T04:32:32Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-31T04:32:25Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-MLP_2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
toshiouchiyama/whisper-small-ja
|
toshiouchiyama
| 2023-01-31T03:44:49Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ja",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-01-02T19:39:54Z |
---
language:
- ja
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-ja
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ja
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3967
- Wer: 18.3755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.3 | 10 | 1.1627 | 26.0985 |
| No log | 0.61 | 20 | 0.7416 | 900.3995 |
| 1.2431 | 0.91 | 30 | 0.6344 | 60.3196 |
| 1.2431 | 1.21 | 40 | 0.5944 | 20.2397 |
| 0.5462 | 1.52 | 50 | 0.5341 | 19.3076 |
| 0.5462 | 1.82 | 60 | 0.4953 | 18.5087 |
| 0.5462 | 2.12 | 70 | 0.4715 | 19.9734 |
| 0.3259 | 2.42 | 80 | 0.4469 | 18.2423 |
| 0.3259 | 2.73 | 90 | 0.4246 | 19.7071 |
| 0.1986 | 3.03 | 100 | 0.4076 | 19.0413 |
| 0.1986 | 3.33 | 110 | 0.3949 | 17.7097 |
| 0.1986 | 3.64 | 120 | 0.4008 | 20.5060 |
| 0.1101 | 3.94 | 130 | 0.3892 | 18.3755 |
| 0.1101 | 4.24 | 140 | 0.3873 | 18.3755 |
| 0.0695 | 4.55 | 150 | 0.3930 | 19.7071 |
| 0.0695 | 4.85 | 160 | 0.3857 | 18.1092 |
| 0.0695 | 5.15 | 170 | 0.3861 | 19.0413 |
| 0.0467 | 5.45 | 180 | 0.3913 | 18.5087 |
| 0.0467 | 5.76 | 190 | 0.3963 | 18.7750 |
| 0.0346 | 6.06 | 200 | 0.3967 | 18.3755 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cpu
- Datasets 2.8.0
- Tokenizers 0.13.2
|
PingfengLuo/icefall-asr-conv-emformer-transducer-stateless2-zh
|
PingfengLuo
| 2023-01-31T03:43:21Z | 0 | 4 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-11-30T10:30:53Z |
---
license: apache-2.0
---
## Chinese-English-mixed ASR model using icefall_conv_emformer2
### Wenetspeech testset results
| TEST_NET | TEST_MEETING |
|----------|--------------|
| 9.64 | 9.2 |
as logged in `decoding_results/modified_beam_search_result`
### Training command
```
python3 conv_emformer_transducer_stateless2/train.py --world-size 8 --num-epochs 30 --start-epoch 1 --exp-dir conv_emformer_transducer_stateless2/exp --max-duration 400 --master-port 12321 --num-encoder-layers 12 --chunk-length 32 --cnn-module-kernel 31 --left-context-length 32 --right-context-length 8 --memory-size 32
```
### Model units are char+BPE, as listed in `data/lang_char_bpe/tokens.txt`
|
jwright94/ppo-SnowballTarget
|
jwright94
| 2023-01-31T03:32:55Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-01-31T03:32:49Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: jwright94/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
scy99/helloworld
|
scy99
| 2023-01-31T03:20:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain",
"zh",
"dataset:scy99/autotrain-data-todo",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-31T03:19:37Z |
---
tags:
- autotrain
- text-classification
language:
- zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- scy99/autotrain-data-todo
co2_eq_emissions:
emissions: 1.5063043935583178
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 3171489424
- CO2 Emissions (in grams): 1.5063
## Validation Metrics
- Loss: 0.339
- Accuracy: 0.848
- Precision: 0.679
- Recall: 0.721
- AUC: 0.906
- F1: 0.700
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/scy99/autotrain-todo-3171489424
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("scy99/autotrain-todo-3171489424", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("scy99/autotrain-todo-3171489424", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
jhn9803/distilbert-base-uncased-finetuned-clinc
|
jhn9803
| 2023-01-31T03:16:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-31T02:52:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9184
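For example, you could query the fine-tuned checkpoint with the `pipeline` API (illustrative sketch):
```python
from transformers import pipeline

# Intent classification with the fine-tuned checkpoint
classifier = pipeline("text-classification", model="jhn9803/distilbert-base-uncased-finetuned-clinc")
print(classifier("Please book a table for two at an Italian restaurant tonight"))
```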
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2890 | 0.7432 |
| 2.6284 | 2.0 | 636 | 1.8756 | 0.8377 |
| 1.5483 | 3.0 | 954 | 1.1572 | 0.8961 |
| 1.015 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7953 | 5.0 | 1590 | 0.7721 | 0.9184 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BotsOne/utilitypole
|
BotsOne
| 2023-01-31T02:34:28Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-31T02:32:02Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### UtilityPole Dreambooth model trained by BotsOne with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
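A minimal `diffusers` sketch; the instance token is assumed to be `UtilityPole`, so adjust it to whatever prompt was actually used during training:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("BotsOne/utilitypole", torch_dtype=torch.float16).to("cuda")

# "UtilityPole" is an assumed instance token from the DreamBooth run
image = pipe("a photo of UtilityPole on a quiet street at sunset").images[0]
image.save("utilitypole.png")
```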
Sample pictures of this concept:
|
cohogain/whisper-medium-ga-IE-cv11-fleurs-livaud
|
cohogain
| 2023-01-31T02:24:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-01-29T13:12:05Z |
---
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: openai/whisper-medium
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: ga-IE
split: test
args: ga-IE
metrics:
- name: Wer
type: wer
value: 35.22067363530778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1422
- Wer: 35.2207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 7000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1137 | 4.02 | 1000 | 0.9072 | 40.0987 |
| 0.0153 | 9.02 | 2000 | 1.0351 | 38.7631 |
| 0.0042 | 14.01 | 3000 | 1.0507 | 36.4402 |
| 0.0013 | 19.0 | 4000 | 1.0924 | 36.2660 |
| 0.0003 | 23.02 | 5000 | 1.1422 | 35.2207 |
| 0.0001 | 28.02 | 6000 | 1.1688 | 35.3368 |
| 0.0001 | 33.01 | 7000 | 1.1768 | 35.5110 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
MatAIart/kurzgesagt-style-v2-768
|
MatAIart
| 2023-01-31T02:02:19Z | 12 | 9 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-02T15:51:46Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Kurzgesagt-style-v2-768 Dreambooth model trained on the v2-768 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
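A minimal local `diffusers` sketch using the concept prompt mentioned below, assuming the 768px resolution of the v2-768 base:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("MatAIart/kurzgesagt-style-v2-768", torch_dtype=torch.float16).to("cuda")

# The card suggests adding "Kurzgesagt style" to the prompt
image = pipe("a planet with rings, Kurzgesagt style", height=768, width=768).images[0]
image.save("kurzgesagt.png")
```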
Sample pictures of:
Kurzgesagt style (use that on your prompt)

|
seongwoon/distilbert-base-uncased-finetuned-labor_space_v3
|
seongwoon
| 2023-01-31T01:48:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-01-31T01:13:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-labor_space_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-labor_space_v3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
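A short masked-prediction sketch (the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="seongwoon/distilbert-base-uncased-finetuned-labor_space_v3")
print(fill_mask("The union negotiated a new [MASK] agreement with management."))
```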
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
sd-concepts-library/mofmof-style
|
sd-concepts-library
| 2023-01-31T01:32:52Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-01-31T01:32:39Z |
---
license: mit
---
### mofmof-style on Stable Diffusion
This is the `<mofmof>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
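Alternatively, a minimal `diffusers` sketch, assuming a v1.x base checkpoint and a `diffusers` release that provides `load_textual_inversion`:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base checkpoint (assumed compatible), then the learned <mofmof> embedding
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/mofmof-style")

image = pipe("a cozy cabin in the style of <mofmof>").images[0]
image.save("mofmof.png")
```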
Here is the new concept you will be able to use as a `style`:



|
Kaludi/Food-Classification
|
Kaludi
| 2023-01-31T01:15:08Z | 56 | 2 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"vision",
"dataset:Kaludi/data-food-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-01-29T18:45:51Z |
---
tags:
- vision
- image-classification
datasets:
- Kaludi/data-food-classification
widget:
- src: https://kristineskitchenblog.com/wp-content/uploads/2021/04/apple-pie-1200-square-592-2.jpg
example_title: Apple Pie
- src: https://upload.wikimedia.org/wikipedia/commons/d/da/Strawberry_ice_cream_cone_%285076899310%29.jpg
example_title: Ice Cream
- src: https://cdn.britannica.com/52/128652-050-14AD19CA/Maki-zushi.jpg
example_title: Sushi
co2_eq_emissions:
emissions: 2.7745203231331614
---
# Food Classification
This is a Food Image Classifier model that has been trained by [Kaludi](https://huggingface.co/Kaludi) to recognize 7 different types of popular foods, including **apple pie**, **falafel**, **french toast**, **ice cream**, **ramen**, **sushi**, and **tiramisu**. It can accurately classify an image of food into one of these categories by analyzing its visual features. This model can be used by food bloggers, restaurants, and recipe websites to quickly categorize and sort their food images, making it easier to manage their content and provide a better user experience.
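To classify a single photo you could use the `pipeline` API (sketch; the image path is a placeholder):
```python
from transformers import pipeline

# Classify a food photo into one of the seven supported classes
classifier = pipeline("image-classification", model="Kaludi/Food-Classification")
print(classifier("ramen.jpg"))  # local path or URL of a food photo
```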
### Gradio
This model supports a [Gradio](https://github.com/gradio-app/gradio) Web UI to run the data-food-classification model:
[](https://huggingface.co/spaces/Kaludi/Food-Classification_App)
## Validation Metrics
- Loss: 0.094
- Accuracy: 0.977
- Macro F1: 0.977
- Micro F1: 0.977
- Weighted F1: 0.977
- Macro Precision: 0.978
- Micro Precision: 0.977
- Weighted Precision: 0.978
- Macro Recall: 0.977
- Micro Recall: 0.977
- Weighted Recall: 0.977
|
francisco-perez-sorrosal/distilbert-base-uncased-finetuned-with-spanish-tweets-clf
|
francisco-perez-sorrosal
| 2023-01-31T00:36:11Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:dataset",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T21:25:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- dataset
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-base-uncased-finetuned-with-spanish-tweets-clf
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: dataset
type: dataset
config: 60-20-20
split: dev
args: 60-20-20
metrics:
- name: Accuracy
type: accuracy
value: 0.5701451278507257
- name: F1
type: f1
value: 0.5651604812495131
- name: Precision
type: precision
value: 0.5665667380442541
- name: Recall
type: recall
value: 0.5641613027059359
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-with-spanish-tweets-clf
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0580
- Accuracy: 0.5701
- F1: 0.5652
- Precision: 0.5666
- Recall: 0.5642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0643 | 1.0 | 543 | 1.0457 | 0.4423 | 0.2761 | 0.5104 | 0.3712 |
| 0.9754 | 2.0 | 1086 | 0.9700 | 0.5155 | 0.4574 | 0.5190 | 0.4712 |
| 0.8145 | 3.0 | 1629 | 0.9691 | 0.5556 | 0.5544 | 0.5616 | 0.5506 |
| 0.6318 | 4.0 | 2172 | 1.0580 | 0.5701 | 0.5652 | 0.5666 | 0.5642 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
twilightBOO/pov-skin-textures-dreamlike-r34-v2
|
twilightBOO
| 2023-01-31T00:32:50Z | 12 | 9 |
diffusers
|
[
"diffusers",
"nsfw",
"stable diffusion",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-23T19:55:08Z |
---
license: openrail
tags:
- nsfw
- stable diffusion
---
# PoV Skin Textures - Dreamlike r34
[pov-skin-texture-dreamlike-r34](https://civitai.com/models/4481/pov-skin-texture-dreamlike-r34)
This version has vae-ft-mse-840000-ema-pruned.ckpt baked in.
Due to using Dreamlike Diffusion 1.0, this model has the following license:
License
This model is licensed under a modified CreativeML OpenRAIL-M license.
- You can't host or use the model or its derivatives on websites/apps/etc., from which you earn, will earn, or plan to earn revenue or donations. If you want to, please email us at [email protected]
- You are free to host the model card and files (Without any actual inference or finetuning) on both commercial and non-commercial websites/apps/etc. Please state the full model name (Dreamlike Diffusion 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0)
- You are free to host the model or its derivatives on completely non-commercial websites/apps/etc (Meaning you are not getting ANY revenue or donations). Please state the full model name (Dreamlike Diffusion 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0)
- You are free to use the outputs of the model or the outputs of the model's derivatives for commercial purposes in teams of 10 or less
- You can't use the model to deliberately produce nor share illegal or harmful outputs or content
- The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
- You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here: https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/blob/main/LICENSE.md
|
talitazahran/adlngnwn
|
talitazahran
| 2023-01-31T00:08:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-30T23:35:30Z |
---
license: creativeml-openrail-m
---
|
astein0/q-FrozenLake-v1-4x4-noSlippery
|
astein0
| 2023-01-30T23:45:49Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T23:45:45Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="astein0/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
PeterDerLustige/q-FrozenLake-v1-4x4-noSlippery
|
PeterDerLustige
| 2023-01-30T23:34:23Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T23:34:20Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="PeterDerLustige/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
andreids/en_textcat_sales
|
andreids
| 2023-01-30T23:31:54Z | 5 | 0 |
spacy
|
[
"spacy",
"text-classification",
"en",
"region:us"
] |
text-classification
| 2023-01-30T23:31:39Z |
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_textcat_sales
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_textcat_sales` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.3,<3.5.0` |
| **Default Pipeline** | `textcat` |
| **Components** | `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (2 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat`** | `OTHER`, `2100 - Sales` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 83.00 |
| `CATS_MICRO_P` | 95.13 |
| `CATS_MICRO_R` | 95.13 |
| `CATS_MICRO_F` | 95.13 |
| `CATS_MACRO_P` | 94.91 |
| `CATS_MACRO_R` | 76.76 |
| `CATS_MACRO_F` | 83.00 |
| `CATS_MACRO_AUC` | 91.29 |
| `CATS_MACRO_AUC_PER_TYPE` | 0.00 |
| `TEXTCAT_LOSS` | 473.84 |
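A short usage sketch, assuming the packaged pipeline from this repo has been installed so spaCy can resolve it by name (the invoice line is illustrative):
```python
import spacy

nlp = spacy.load("en_textcat_sales")
doc = nlp("Invoice for consulting services rendered in December")
print(doc.cats)  # scores for "OTHER" and "2100 - Sales"
```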
|
Tiemi/FunnyShihTzu-dog
|
Tiemi
| 2023-01-30T23:25:16Z | 4 | 9 |
diffusers
|
[
"diffusers",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"animal",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-29T21:13:42Z |
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- animal
widget:
- text: a cartoon digital art of FunnyShihTzu dog smiling
---
# DreamBooth model for the FunnyShihTzu concept trained by Tiemi on the Tiemi/FunnyShihTzu dataset.
This is a Stable Diffusion model fine-tuned on photos of my dog with DreamBooth 🐕.
It can be used by modifying the `instance_prompt` and keeping the tag FunnyShihTzu.
**Examples of prompts:**
- a cartoon digital art of FunnyShihTzu dog smiling
- a photo of FunnyShihTzu dog laying in the couch
- a funko pop of FunnyShihTzu dog smiling
Each time you run the prompt you'll see a different image (even with the same text).
If you enjoy this model, please give it a like ❤️.
## Description
This is a Stable Diffusion model fine-tuned on `dog` images for the animal theme.
## Photo of my dog:
<img src="https://s3.amazonaws.com/moonup/production/uploads/1672671005943-6192492551e3de53a3628c6b.jpeg" alt="shih_tzu" width="200"/>
## Examples of generated images:







## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('Tiemi/FunnyShihTzu-dog')
image = pipeline().images[0]
image
```
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
|
talitazahran/jenjen
|
talitazahran
| 2023-01-30T23:17:51Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-30T23:03:36Z |
---
license: creativeml-openrail-m
---
|
AliBuildsAI/sd-class-butterflies-32
|
AliBuildsAI
| 2023-01-30T22:25:04Z | 2 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-01-30T22:24:38Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('AliBuildsAI/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
huggingtweets/danidevyt
|
huggingtweets
| 2023-01-30T22:23:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-01-30T22:10:36Z |
---
language: en
thumbnail: http://www.huggingtweets.com/danidevyt/1675116733764/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1139870822934466562/-_KKMAE7_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dani</div>
<div style="text-align: center; font-size: 14px;">@danidevyt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dani.
| Data | Dani |
| --- | --- |
| Tweets downloaded | 2070 |
| Retweets | 84 |
| Short tweets | 433 |
| Tweets kept | 1553 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bjcolos/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @danidevyt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/rz82k3zq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/rz82k3zq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/danidevyt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
epinnock/flan-t5-small-samsum
|
epinnock
| 2023-01-30T22:21:58Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-30T19:24:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: flan-t5-small-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-samsum
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset.
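A brief usage sketch (the dialogue is made up):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="epinnock/flan-t5-small-samsum")
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you then!"
)
print(summarizer(dialogue)[0]["summary_text"])
```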
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 461 | nan | 41.7065 | 17.7336 | 34.2478 | 38.1372 | 16.8864 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.0+cu116
- Datasets 2.9.0
- Tokenizers 0.12.1
|
lotek93/a2c-PandaReachDense-v2
|
lotek93
| 2023-01-30T22:09:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T22:07:23Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.68 +/- 0.23
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
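A minimal loading sketch; the checkpoint filename is an assumption, so check the repository files for the actual name. Creating the `PandaReachDense-v2` environment additionally requires `panda-gym`.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="lotek93/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```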
|
odiaz1066/a2c-AntBulletEnv-v0
|
odiaz1066
| 2023-01-30T22:06:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T22:05:34Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1190.35 +/- 89.58
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
robotman0/Reinforce-pixelcopter
|
robotman0
| 2023-01-30T21:37:02Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T20:03:56Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.10 +/- 28.05
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Lakoc/ppo-LunarLander-v2
|
Lakoc
| 2023-01-30T21:29:56Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T21:21:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 283.67 +/- 15.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_mnli
|
gokuls
| 2023-01-30T21:28:46Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T16:36:16Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8389951179820992
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_mnli
This model is a fine-tuned version of [gokuls/mobilebert_sa_pre-training-complete](https://huggingface.co/gokuls/mobilebert_sa_pre-training-complete) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3782
- Accuracy: 0.8390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6657 | 1.0 | 3068 | 0.4271 | 0.8153 |
| 0.4271 | 2.0 | 6136 | 0.4219 | 0.8248 |
| 0.3376 | 3.0 | 9204 | 0.3896 | 0.8356 |
| 0.2799 | 4.0 | 12272 | 0.3866 | 0.8380 |
| 0.2397 | 5.0 | 15340 | 0.3847 | 0.8397 |
| 0.21 | 6.0 | 18408 | 0.3990 | 0.8403 |
| 0.1885 | 7.0 | 21476 | 0.3940 | 0.8380 |
| 0.1723 | 8.0 | 24544 | 0.4066 | 0.8373 |
| 0.1588 | 9.0 | 27612 | 0.3966 | 0.8388 |
| 0.149 | 10.0 | 30680 | 0.3883 | 0.8422 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
inseq/wmt20-mlqe-et-en
|
inseq
| 2023-01-30T21:15:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"fsmt",
"text2text-generation",
"translation",
"wmt20",
"en",
"et",
"multilingual",
"dataset:wmt/europarl",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-01-30T12:21:29Z |
---
language:
- en
- et
- multilingual
license: cc-by-sa-4.0
tags:
- translation
- wmt20
datasets:
- wmt/europarl
widget:
- text: "Jupiter on Päikesest kauguselt viies planeet ja Päikesesüsteemi kõige suurem planeet."
- text: "Plejaadid on Sõnni tähtkujus asuv hajusparv, mille Messier' kataloogi tähiseks on M45."
- text: "Palju on vaieldud Vikipeedia usaldatavuse ja täpsuse üle. Kritiseeritud on selle avatust vandaalidele, ebaühtlast kvaliteeti ja vasturääkivust, mitteneutraalsust ja konsensuse või populaarsuse eelistamist kvalifitseeritusele."
---
# Fairseq Et-En NMT WMT20 MLQE
This repository contains the Estonian-English model trained with the [fairseq toolkit](https://github.com/pytorch/fairseq) that was used to produce translations used in the WMT20 shared task on quality estimation (QE) on the [MLQE dataset](https://github.com/facebookresearch/mlqe).
The checkpoint was converted from the original fairseq checkpoint available [here](https://github.com/facebookresearch/mlqe/tree/master/nmt_models) using the `convert_fsmt_original_pytorch_checkpoint_to_pytorch.py` script from the 🤗 Transformers library (v4.26.0).
Please refer to the repositories linked above for additional information on usage, parameters and training data.
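For reference, the converted checkpoint can be loaded with the FSMT classes from 🤗 Transformers (illustrative sketch):
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

model_id = "inseq/wmt20-mlqe-et-en"
tokenizer = FSMTTokenizer.from_pretrained(model_id)
model = FSMTForConditionalGeneration.from_pretrained(model_id)

# Translate an Estonian sentence into English
inputs = tokenizer("Jupiter on Päikesesüsteemi kõige suurem planeet.", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```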
|
eric-nlp/Cool_Model
|
eric-nlp
| 2023-01-30T21:08:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-30T21:07:00Z |
---
tags:
- generated_from_trainer
model-index:
- name: result
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [huawei-noah/TinyBERT_General_4L_312D](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
inseq/wmt20-mlqe-en-zh
|
inseq
| 2023-01-30T21:07:49Z | 6 | 7 |
transformers
|
[
"transformers",
"pytorch",
"fsmt",
"text2text-generation",
"translation",
"wmt20",
"en",
"zh",
"multilingual",
"dataset:wmt/news-commentary",
"dataset:wmt/wikititles",
"dataset:wmt/uncorpus",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-01-30T12:22:22Z |
---
language:
- en
- zh
- multilingual
license: cc-by-sa-4.0
tags:
- translation
- wmt20
datasets:
- wmt/news-commentary
- wmt/wikititles
- wmt/uncorpus
widget:
- text: "It is a plump quail-shaped bird with white eyes and predominantly marbled black, rufous and pale brown plumage, marked prominently with white spots and stripes."
- text: "The 59th Primetime Creative Arts Emmy Awards honored the best in artistic and technical achievement in American prime time television programming from June 1, 2006, until May 31, 2007, as chosen by the Academy of Television Arts & Sciences."
- text: "While forests in temperate areas are readily categorised on the basis of tree canopy density, such schemes do not work well in tropical forests."
---
# Fairseq En-Zh NMT WMT20 MLQE
This repository contains the English-Chinese model trained with the [fairseq toolkit](https://github.com/pytorch/fairseq) that was used to produce translations used in the WMT20 shared task on quality estimation (QE) on the [MLQE dataset](https://github.com/facebookresearch/mlqe).
The checkpoint was converted from the original fairseq checkpoint available [here](https://github.com/facebookresearch/mlqe/tree/master/nmt_models) using the `convert_fsmt_original_pytorch_checkpoint_to_pytorch.py` script from the 🤗 Transformers library (v4.26.0).
Please refer to the repositories linked above for additional information on usage, parameters and training data.
|
bhalll/q-FrozenLake-v1-4x4-noSlippery
|
bhalll
| 2023-01-30T21:04:03Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T21:04:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="bhalll/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
generateai/my_awesome_model4
|
generateai
| 2023-01-30T20:54:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T20:45:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 25.4886
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.02
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6252 | 1.0 | 1 | 3.9768 | 0.0 |
| 1.0027 | 2.0 | 2 | 25.4886 | 0.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Celal11/resnet-50-finetuned-FER2013-0.003-CKPlus
|
Celal11
| 2023-01-30T20:54:32Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-01-30T20:52:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-FER2013-0.003-CKPlus
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9847715736040609
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-FER2013-0.003-CKPlus
This model is a fine-tuned version of [Celal11/resnet-50-finetuned-FER2013-0.003](https://huggingface.co/Celal11/resnet-50-finetuned-FER2013-0.003) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Accuracy: 0.9848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6689 | 0.97 | 27 | 0.1123 | 0.9797 |
| 0.2929 | 1.97 | 54 | 0.0614 | 0.9848 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
MtCelesteMa/bert-base-uncased-finetuned-multiglue
|
MtCelesteMa
| 2023-01-30T20:38:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:MtCelesteMa/multiglue",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T19:59:04Z |
---
license: apache-2.0
datasets:
- MtCelesteMa/multiglue
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is bert-base-uncased finetuned on the MultiGLUE dataset.
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** English
- **License:** Apache 2.0 (same as BERT)
- **Finetuned from model [optional]:** bert-base-uncased
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import numpy as np
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('bert-base-uncased')
model = transformers.AutoModelForSequenceClassification.from_pretrained('MtCelesteMa/bert-base-uncased-finetuned-multiglue')
task = 'cola'
sentence1 = 'Our friends won\'t buy this analysis, let alone the next one we propose.'
sentence2 = None
inputs = tokenizer(f'{task}:{sentence1}', f'{sentence2}', return_tensors='pt')
outputs = model(**inputs)
label = np.argmax(outputs.logits[0].detach().numpy())
print(label)
```
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** RTX A6000
- **Hours used:** 2
- **Cloud Provider:** [vast.ai](https://vast.ai)
- **Compute Region:** Sweden
- **Carbon Emitted:** 0.26 kg
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
JoshuaRubin/t5-small-finetuned-math_qa-problem-formula_rationale
|
JoshuaRubin
| 2023-01-30T20:25:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:math_qa",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-01T11:39:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- math_qa
model-index:
- name: t5-small-finetuned-math_qa-problem-formula_rationale
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-math_qa-problem-formula_rationale
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the math_qa dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
whispAI/ClaimBuster-DeBERTaV2
|
whispAI
| 2023-01-30T20:14:39Z | 198 | 1 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain",
"en",
"dataset:lucafrost/autotrain-data-claimbuster",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T19:52:01Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- lucafrost/autotrain-data-claimbuster
co2_eq_emissions:
emissions: 23.102349586537482
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 3165789318
- CO2 Emissions (in grams): 23.1023
## Validation Metrics
- Loss: 0.405
- Accuracy: 0.842
- Macro F1: 0.753
- Micro F1: 0.842
- Weighted F1: 0.843
- Macro Precision: 0.750
- Micro Precision: 0.842
- Weighted Precision: 0.844
- Macro Recall: 0.756
- Micro Recall: 0.842
- Weighted Recall: 0.842
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lucafrost/ClaimBuster-DeBERTaV2
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("lucafrost/ClaimBuster-DeBERTaV2", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lucafrost/ClaimBuster-DeBERTaV2", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Celal11/resnet-50-finetuned-FER2013CKPlus-0.003
|
Celal11
| 2023-01-30T20:06:14Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-01-30T20:02:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-FER2013CKPlus-0.003
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-FER2013CKPlus-0.003
This model is a fine-tuned version of [Celal11/resnet-50-finetuned-FER2013-0.003](https://huggingface.co/Celal11/resnet-50-finetuned-FER2013-0.003) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0073
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8084 | 0.97 | 27 | 0.2004 | 0.9289 |
| 0.362 | 1.97 | 54 | 0.0828 | 0.9848 |
| 0.2972 | 2.97 | 81 | 0.0185 | 0.9949 |
| 0.1917 | 3.97 | 108 | 0.0132 | 1.0 |
| 0.1572 | 4.97 | 135 | 0.0073 | 1.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
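The card does not include an inference example; the following is a minimal, hypothetical sketch using the generic image-classification pipeline (the image path is a placeholder, and label names come from the model's config).
```python
from transformers import pipeline
# Hypothetical usage sketch for the fine-tuned ResNet-50 emotion classifier.
classifier = pipeline("image-classification", model="Celal11/resnet-50-finetuned-FER2013CKPlus-0.003")
print(classifier("face.jpg"))  # "face.jpg" is a placeholder path to a face image
```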
|
lilouuch/Goodreads_Books_Reviews_distilbert
|
lilouuch
| 2023-01-30T20:04:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-28T16:00:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: Goodreads_Books_Reviews_distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Goodreads_Books_Reviews_distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [Goodreads Books Reviews dataset](https://www.kaggle.com/competitions/goodreads-books-reviews-290312/data).
It achieves the following results on the evaluation set:
- Loss: 0.9281
- F1: 0.6246
- Accuracy: 0.6338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------:|
| 0.9445 | 1.0 | 9780 | 0.9275 | 0.6058 | 0.6228 |
| 0.8688 | 2.0 | 19560 | 0.9090 | 0.6227 | 0.6291 |
| 0.7786 | 3.0 | 29340 | 0.9281 | 0.6246 | 0.6338 |
| 0.7039 | 4.0 | 39120 | 0.9576 | 0.6226 | 0.6314 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
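No usage example is provided in the card; a minimal, hypothetical sketch with the text-classification pipeline is shown below (the label-to-rating mapping comes from the model's config and is not documented here).
```python
from transformers import pipeline
# Hypothetical usage sketch for predicting a review rating class.
classifier = pipeline("text-classification", model="lilouuch/Goodreads_Books_Reviews_distilbert")
print(classifier("A beautifully written story, though the ending felt rushed."))
```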
|
lilouuch/Goodreads_Books_Reviews_BERT_51
|
lilouuch
| 2023-01-30T20:03:44Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-27T17:13:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: Goodreads_Books_Reviews_BERT_51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Goodreads_Books_Reviews_BERT_51
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [Goodreads Books Reviews dataset](https://www.kaggle.com/competitions/goodreads-books-reviews-290312/data).
It achieves the following results on the evaluation set:
- Loss: 0.9079
- F1: 0.6366
- Accuracy: 0.6355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------:|
| 0.9474 | 1.0 | 7080 | 0.9415 | 0.6165 | 0.6179 |
| 0.8295 | 2.0 | 14160 | 0.9079 | 0.6366 | 0.6355 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
joheras/NASES-clara-med
|
joheras
| 2023-01-30T19:53:18Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"simplification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-12T16:33:41Z |
---
tags:
- simplification
- generated_from_trainer
metrics:
- rouge
model-index:
- name: NASES-clara-med
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NASES-clara-med
This model is a fine-tuned version of [ELiRF/NASES](https://huggingface.co/ELiRF/NASES) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2666
- Rouge1: 44.0787
- Rouge2: 26.1429
- Rougel: 38.4286
- Rougelsum: 38.5202
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 190 | 2.1442 | 43.6265 | 25.4681 | 37.6224 | 37.8012 |
| No log | 2.0 | 380 | 2.0839 | 44.0795 | 25.8075 | 37.9463 | 38.0445 |
| 1.8145 | 3.0 | 570 | 2.1689 | 43.3863 | 25.7517 | 37.4822 | 37.7461 |
| 1.8145 | 4.0 | 760 | 2.2569 | 43.9293 | 25.7951 | 37.9177 | 38.0658 |
| 0.6803 | 5.0 | 950 | 2.3760 | 43.9972 | 26.1618 | 38.4315 | 38.5305 |
| 0.6803 | 6.0 | 1140 | 2.4979 | 44.7986 | 27.0088 | 39.0031 | 39.1731 |
| 0.6803 | 7.0 | 1330 | 2.5881 | 43.8723 | 25.9782 | 38.1705 | 38.3225 |
| 0.2323 | 8.0 | 1520 | 2.6624 | 43.851 | 25.9263 | 38.2445 | 38.3659 |
| 0.2323 | 9.0 | 1710 | 2.7113 | 43.5292 | 25.4795 | 37.6883 | 37.8992 |
| 0.1464 | 10.0 | 1900 | 2.7451 | 44.6014 | 27.0125 | 38.9456 | 39.1796 |
| 0.1464 | 11.0 | 2090 | 2.7932 | 43.9568 | 26.0931 | 38.3672 | 38.5118 |
| 0.1464 | 12.0 | 2280 | 2.8651 | 43.8429 | 25.9007 | 38.0691 | 38.191 |
| 0.0863 | 13.0 | 2470 | 2.8978 | 44.192 | 26.1818 | 38.4167 | 38.579 |
| 0.0863 | 14.0 | 2660 | 2.9279 | 43.6745 | 25.6503 | 37.8948 | 38.0051 |
| 0.0657 | 15.0 | 2850 | 2.9942 | 44.1633 | 25.7856 | 38.0295 | 38.1905 |
| 0.0657 | 16.0 | 3040 | 2.9843 | 44.0347 | 25.9893 | 38.3486 | 38.5219 |
| 0.0657 | 17.0 | 3230 | 3.0189 | 44.3013 | 26.1884 | 38.5594 | 38.7396 |
| 0.0473 | 18.0 | 3420 | 3.0837 | 43.5877 | 25.6931 | 38.1147 | 38.2258 |
| 0.0473 | 19.0 | 3610 | 3.1025 | 44.1191 | 25.9657 | 38.338 | 38.5039 |
| 0.0302 | 20.0 | 3800 | 3.1395 | 44.393 | 26.3189 | 38.7891 | 38.8664 |
| 0.0302 | 21.0 | 3990 | 3.1808 | 44.4783 | 26.3023 | 38.4714 | 38.6428 |
| 0.0302 | 22.0 | 4180 | 3.1388 | 44.6364 | 26.7442 | 38.9591 | 39.1097 |
| 0.0194 | 23.0 | 4370 | 3.1859 | 44.919 | 26.9807 | 39.2653 | 39.3442 |
| 0.0194 | 24.0 | 4560 | 3.2126 | 44.4693 | 26.6534 | 38.8354 | 38.9278 |
| 0.0159 | 25.0 | 4750 | 3.1988 | 44.5436 | 26.63 | 38.9413 | 39.0007 |
| 0.0159 | 26.0 | 4940 | 3.2539 | 44.0378 | 26.0958 | 38.4445 | 38.5443 |
| 0.0159 | 27.0 | 5130 | 3.2844 | 44.6057 | 26.476 | 38.6502 | 38.7949 |
| 0.0117 | 28.0 | 5320 | 3.2755 | 44.1804 | 26.3747 | 38.6084 | 38.7027 |
| 0.0117 | 29.0 | 5510 | 3.2731 | 44.0453 | 26.0298 | 38.3911 | 38.4826 |
| 0.0102 | 30.0 | 5700 | 3.2666 | 44.0787 | 26.1429 | 38.4286 | 38.5202 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0
- Datasets 2.8.0
- Tokenizers 0.12.1
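Since the card lacks a usage example, here is a minimal, hypothetical simplification sketch with the text2text-generation pipeline (the input sentence and generation parameters are illustrative only).
```python
from transformers import pipeline
# Hypothetical usage sketch for Spanish medical text simplification.
simplifier = pipeline("text2text-generation", model="joheras/NASES-clara-med")
texto = "El paciente presenta hipertensión arterial resistente al tratamiento farmacológico."
print(simplifier(texto, max_length=128)[0]["generated_text"])
```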
|
robotman0/Reinforce-v0
|
robotman0
| 2023-01-30T19:40:27Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T19:40:15Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
bhpardo/clasificador-muchocine
|
bhpardo
| 2023-01-30T18:52:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T18:51:03Z |
---
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3195
- Accuracy: 0.4297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3924 | 0.3652 |
| 1.4772 | 2.0 | 776 | 1.2545 | 0.4310 |
| 1.1251 | 3.0 | 1164 | 1.3195 | 0.4297 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Joqsan/custom-fnet-finetuned-rte
|
Joqsan
| 2023-01-30T18:30:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"my_fnet",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T18:24:20Z |
---
tags:
- generated_from_trainer
model-index:
- name: custom-fnet-finetuned-rte
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# custom-fnet-finetuned-rte
This model is a fine-tuned version of [Joqsan/custom-fnet](https://huggingface.co/Joqsan/custom-fnet) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
vuiseng9/jpqd-bert-large-lt-30eph-r0.0500-s5e15
|
vuiseng9
| 2023-01-30T18:27:34Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2023-01-30T16:52:13Z |
# Joint Pruning, Quantization and Distillation for BERT-large/SQuADv1.1
## Setup
```bash
git clone https://github.com/vuiseng9/optimum-intel
cd optimum-intel
pip install -e .[openvino,nncf]
cd examples/openvino/question-answering/
pip install -r requirements.txt
pip install wandb # optional
```
## Run
```bash
NNCFCFG=/path/to/openvino_config.json
MASTER_PORT=<PORTID>
RUNID=<RUN_IDENTIFIER>
OUTDIR=/path/to/saved_model
NEPOCH=30
python -m torch.distributed.launch \
--nproc_per_node 4 \
--master_port $MASTER_PORT \
run_qa.py \
--model_name_or_path bert-large-uncased-whole-word-masking \
--dataset_name squad \
--teacher_model_or_path bert-large-uncased-whole-word-masking-finetuned-squad \
--distillation_weight 0.9 \
--do_eval \
--fp16 \
--do_train \
--learning_rate 3e-5 \
--num_train_epochs $NEPOCH \
--per_device_eval_batch_size 128 \
--per_device_train_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--logging_steps 1 \
--evaluation_strategy steps \
--eval_steps 250 \
--save_steps 500 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR \
--nncf_compression_config $NNCFCFG
```
### Reference Results
```
Global Step: 39500
F1: 92.482
EM: 86.594
Structured Sparsity (linear): 61.70%
Model Sparsity: 55.82%
```
|
harshvardhan96/output-results
|
harshvardhan96
| 2023-01-30T18:24:05Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-01-30T17:49:53Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - harshvardhan96/output-results
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on images of a male character with a beard using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




|
Joqsan/bert-base-uncased-finetuned-rte
|
Joqsan
| 2023-01-30T18:21:57Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T18:13:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-rte
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-rte
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
tomekkorbak/goofy_pasteur
|
tomekkorbak
| 2023-01-30T17:52:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-11-25T10:19:01Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: goofy_pasteur
results: []
---
# goofy_pasteur
- **Repository: https://github.com/tomekkorbak/aligned-pretraining-objectives**
- **Paper: Arxiv link to be added**
## Model description
This model was trained using [pile-detoxify](https://huggingface.co/datasets/tomekkorbak/pile-detoxify), which is data from [The Pile](https://huggingface.co/datasets/the_pile), annotated based on toxicity detected by [Detoxify](https://github.com/unitaryai/detoxify).
## Intended uses & limitations
This model has been trained to generate text that receives a low score for toxicity from [Detoxify](https://github.com/unitaryai/detoxify).
While we have promising results with the methods used to avoid toxic text, we cannot guarantee that it will output text that is fully aligned with non-toxicity in every situation.
This model and its associated datasets are intended for research purposes only and should not be deployed anywhere.
Please take care to avoid misusing the datasets used to train this model (where toxicity and personal identifiable information are annotated) or putting anybody in danger by publicizing their information.
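For research-only experimentation, a minimal generation sketch is shown below (hypothetical usage; the `gpt2` tag suggests the checkpoint loads with the standard GPT-2 classes).
```python
from transformers import pipeline
# Hypothetical usage sketch; research purposes only, as noted above.
generator = pipeline("text-generation", model="tomekkorbak/goofy_pasteur")
print(generator("The meeting started with", max_length=40, do_sample=True)[0]["generated_text"])
```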
## Training and evaluation data
This model was trained using [pile-detoxify](https://huggingface.co/datasets/tomekkorbak/pile-detoxify).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'goofy_pasteur',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/20d87pk8
|
shrikritisingh/my-setfit
|
shrikritisingh
| 2023-01-30T17:32:57Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-01-30T17:32:43Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 223 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 223,
"warmup_steps": 23,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
f-franco/ppo-LunarLander-v2
|
f-franco
| 2023-01-30T17:14:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T15:41:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 276.89 +/- 18.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
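Since the usage section is left as a TODO, here is a minimal, hypothetical loading sketch; the checkpoint filename inside the repo is assumed to follow the usual `<algo>-<env>.zip` convention and may differ.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed filename; check the repo's file list.
checkpoint = load_from_hub(repo_id="f-franco/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```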
|
kaliani/flair-ner-skill
|
kaliani
| 2023-01-30T16:43:40Z | 105 | 7 |
flair
|
[
"flair",
"pytorch",
"bert",
"token-classification",
"sequence-tagger-model",
"en",
"region:us"
] |
token-classification
| 2022-08-10T07:07:20Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
widget:
- text: "Delphi SQL developer"
example_title: "Example 1"
- text: "Searching for new opportunities as Junior Node.js JavaScript backend developer. Over 15 years of experience in different IT areas. Experience with: Node.js JavaScript MongoDB HTML CSS Java Lotus Script websocket socket.io Docker babel Webpack MySQL JSON React"
example_title: "Example 2"
- text: "Experienced Chief Executive Officer with a demonstrated history of working in the wholesale industry. Skilled in Customer Service, Sales, Strategic Planning, and Business Development. Strong business development professional."
example_title: "Example 3"
---
## English NER in Flair (Ontonotes fast model)
F1-Score: **84.3** (Ontonotes)
Predicts 2 tags:
| tag | meaning |
|---------------------------------|-----------|
| SKILL | skill name |
| EXPERIENCE | year of experience |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
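A minimal tagging sketch with the Flair API is shown below (hypothetical usage; the example sentence is illustrative and the model id is assumed to load directly from the Hub).
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Hypothetical usage sketch for skill/experience tagging.
tagger = SequenceTagger.load("kaliani/flair-ner-skill")
sentence = Sentence("Senior Python developer with 5 years of experience in Django and AWS")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```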
|
HuyenNguyen/Vigec-V5
|
HuyenNguyen
| 2023-01-30T16:42:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-30T15:09:52Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Vigec-V5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Vigec-V5
This model is a fine-tuned version of [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3694
- Bleu: 77.0736
- Gen Len: 10.0475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.195 | 0.01 | 500 | 0.9492 | 43.0845 | 7.2405 |
| 0.978 | 0.01 | 1000 | 0.7804 | 61.0671 | 9.7255 |
| 0.8418 | 0.02 | 1500 | 0.6798 | 64.3811 | 9.9025 |
| 0.8148 | 0.03 | 2000 | 0.6046 | 66.1944 | 10.043 |
| 0.7622 | 0.04 | 2500 | 0.5513 | 68.2851 | 10.1215 |
| 0.7199 | 0.04 | 3000 | 0.5146 | 69.7161 | 10.0795 |
| 0.7898 | 0.05 | 3500 | 0.4869 | 71.1868 | 10.079 |
| 0.6921 | 0.06 | 4000 | 0.4648 | 72.4203 | 10.0345 |
| 0.6827 | 0.07 | 4500 | 0.4490 | 73.2133 | 10.039 |
| 0.6102 | 0.07 | 5000 | 0.4355 | 73.6841 | 10.078 |
| 0.5805 | 0.08 | 5500 | 0.4176 | 74.2559 | 10.059 |
| 0.6806 | 0.09 | 6000 | 0.4081 | 74.7389 | 10.0655 |
| 0.6544 | 0.09 | 6500 | 0.3958 | 75.2603 | 10.025 |
| 0.6244 | 0.1 | 7000 | 0.3904 | 75.9306 | 10.0565 |
| 0.7212 | 0.11 | 7500 | 0.3822 | 76.3268 | 10.0505 |
| 0.5446 | 0.12 | 8000 | 0.3785 | 76.5306 | 10.0505 |
| 0.5574 | 0.12 | 8500 | 0.3741 | 76.7101 | 10.0545 |
| 0.6265 | 0.13 | 9000 | 0.3721 | 76.8858 | 10.043 |
| 0.5379 | 0.14 | 9500 | 0.3695 | 77.001 | 10.051 |
| 0.6164 | 0.14 | 10000 | 0.3694 | 77.0736 | 10.0475 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
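No usage example is given; a minimal, hypothetical correction sketch with the text2text-generation pipeline follows (the card does not document a prompt format, so plain sentence input is assumed).
```python
from transformers import pipeline
# Hypothetical usage sketch for Vietnamese grammatical error correction.
corrector = pipeline("text2text-generation", model="HuyenNguyen/Vigec-V5")
print(corrector("Tôi rất thích đọc sách khoa học viễn tưởng", max_length=64)[0]["generated_text"])
```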
|
Mehtap/whisper-tiny-2023-01-30
|
Mehtap
| 2023-01-30T16:38:59Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-01-30T14:24:40Z |
---
language:
- tr
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
metrics:
- wer
model-index:
- name: tiny Turkish Whisper (tTW)
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny Turkish Whisper (tTW)
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Ermetal Meetings dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0735
- Wer: 1.4939
- Cer: 1.0558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.5.2
- Tokenizers 0.13.1
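A minimal transcription sketch is shown below (hypothetical usage; the audio path is a placeholder, and Whisper expects 16 kHz mono input).
```python
from transformers import pipeline
# Hypothetical usage sketch; "meeting_clip.wav" is a placeholder audio file.
asr = pipeline("automatic-speech-recognition", model="Mehtap/whisper-tiny-2023-01-30")
print(asr("meeting_clip.wav")["text"])
```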
|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_wnli
|
gokuls
| 2023-01-30T16:33:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T16:32:33Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.29577464788732394
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_wnli
This model is a fine-tuned version of [gokuls/mobilebert_sa_pre-training-complete](https://huggingface.co/gokuls/mobilebert_sa_pre-training-complete) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3677
- Accuracy: 0.2958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3708 | 1.0 | 5 | 0.3927 | 0.3944 |
| 0.3555 | 2.0 | 10 | 0.3715 | 0.4225 |
| 0.3493 | 3.0 | 15 | 0.3677 | 0.2958 |
| 0.3485 | 4.0 | 20 | 0.3704 | 0.3803 |
| 0.3454 | 5.0 | 25 | 0.3815 | 0.2394 |
| 0.3461 | 6.0 | 30 | 0.3878 | 0.2394 |
| 0.3432 | 7.0 | 35 | 0.3962 | 0.2535 |
| 0.3427 | 8.0 | 40 | 0.4050 | 0.1972 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_stsb
|
gokuls
| 2023-01-30T16:31:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T16:18:17Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8642221596976783
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_stsb
This model is a fine-tuned version of [gokuls/mobilebert_sa_pre-training-complete](https://huggingface.co/gokuls/mobilebert_sa_pre-training-complete) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2919
- Pearson: 0.8665
- Spearmanr: 0.8642
- Combined Score: 0.8654
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 1.1501 | 1.0 | 45 | 0.4726 | 0.7774 | 0.7922 | 0.7848 |
| 0.364 | 2.0 | 90 | 0.3480 | 0.8457 | 0.8455 | 0.8456 |
| 0.259 | 3.0 | 135 | 0.3156 | 0.8582 | 0.8590 | 0.8586 |
| 0.2054 | 4.0 | 180 | 0.4231 | 0.8551 | 0.8549 | 0.8550 |
| 0.1629 | 5.0 | 225 | 0.3245 | 0.8668 | 0.8654 | 0.8661 |
| 0.1263 | 6.0 | 270 | 0.3192 | 0.8649 | 0.8625 | 0.8637 |
| 0.1021 | 7.0 | 315 | 0.3337 | 0.8655 | 0.8629 | 0.8642 |
| 0.0841 | 8.0 | 360 | 0.3061 | 0.8601 | 0.8577 | 0.8589 |
| 0.0713 | 9.0 | 405 | 0.3600 | 0.8576 | 0.8555 | 0.8566 |
| 0.0587 | 10.0 | 450 | 0.3135 | 0.8620 | 0.8600 | 0.8610 |
| 0.0488 | 11.0 | 495 | 0.3006 | 0.8641 | 0.8620 | 0.8631 |
| 0.0441 | 12.0 | 540 | 0.3308 | 0.8645 | 0.8621 | 0.8633 |
| 0.0385 | 13.0 | 585 | 0.3468 | 0.8620 | 0.8601 | 0.8610 |
| 0.0346 | 14.0 | 630 | 0.3175 | 0.8658 | 0.8634 | 0.8646 |
| 0.0298 | 15.0 | 675 | 0.2919 | 0.8665 | 0.8642 | 0.8654 |
| 0.0299 | 16.0 | 720 | 0.3103 | 0.8649 | 0.8628 | 0.8639 |
| 0.0263 | 17.0 | 765 | 0.3325 | 0.8620 | 0.8599 | 0.8609 |
| 0.0237 | 18.0 | 810 | 0.3092 | 0.8636 | 0.8611 | 0.8623 |
| 0.0213 | 19.0 | 855 | 0.3169 | 0.8653 | 0.8631 | 0.8642 |
| 0.0196 | 20.0 | 900 | 0.2985 | 0.8647 | 0.8624 | 0.8636 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
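STS-B is a regression task, so the model emits a single similarity score per sentence pair; a minimal, hypothetical scoring sketch is shown below (the example sentences are illustrative).
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # predicted similarity (roughly a 0-5 scale)
print(score)
```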
|
phonenix/CartPole-v1
|
phonenix
| 2023-01-30T16:31:27Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-28T16:32:33Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
erebusnorms/q-Taxi-v3
|
erebusnorms
| 2023-01-30T16:16:12Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T16:16:08Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.42 +/- 2.77
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="erebusnorms/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
erkam/sd-pokemon-model-lora
|
erkam
| 2023-01-30T16:15:48Z | 4 | 4 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-01-29T22:21:23Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - https://huggingface.co/erkam/sd-pokemon-model-lora
These are LoRA adaptation weights for stabilityai/stable-diffusion-2. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following.




|
kuan2/taxi
|
kuan2
| 2023-01-30T16:15:40Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T16:15:35Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.77
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="kuan2/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
erebusnorms/q-FrozenLake-v1-4x4-noSlippery
|
erebusnorms
| 2023-01-30T16:14:34Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T16:14:31Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="erebusnorms/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kuan2/q-FrozenLake-v1-4x4-noSlippery
|
kuan2
| 2023-01-30T16:14:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T16:14:15Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="kuan2/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
huggingtweets/muzhroommama
|
huggingtweets
| 2023-01-30T16:13:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-01-28T03:31:07Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1597709018142855170/e0xfVtT4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">silly little time</div>
<div style="text-align: center; font-size: 14px;">@muzhroommama</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from silly little time.
| Data | silly little time |
| --- | --- |
| Tweets downloaded | 236 |
| Retweets | 87 |
| Short tweets | 32 |
| Tweets kept | 117 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/xaynl4xc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @muzhroommama's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/x523rtvl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/x523rtvl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/muzhroommama')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Aditya02/stt_en_citrinet_1024
|
Aditya02
| 2023-01-30T16:11:20Z | 5 | 0 |
nemo
|
[
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"Citrinet",
"Transformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2104.01721",
"license:cc-by-4.0",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2023-01-30T15:58:44Z |
---
language:
- en
library_name: nemo
datasets:
- librispeech_asr
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- Citrinet
- Transformer
- pytorch
- NeMo
- hf-asr-leaderboard
license: cc-by-4.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: stt_en_citrinet_1024_ls
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 2.5
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 6.3
---
# NVIDIA Citrinet CTC 1024 Librispeech (en-US)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
| [](#deployment-with-nvidia-riva) |
This model transcribes speech into the lowercase English alphabet along with spaces and apostrophes.
It is a "large" version of the Citrinet-CTC model (around 140M parameters).
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#citrinet) for complete architecture details.
It is also compatible with NVIDIA Riva for [production-grade server deployments](#deployment-with-nvidia-riva).
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed latest Pytorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("nvidia/stt_en_citrinet_1024_ls")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/stt_en_citrinet_1024_ls" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16000 Hz (16 kHz) mono-channel audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
Citrinet-CTC is a non-autoregressive variant of the Citrinet model [1] for Automatic Speech Recognition which uses CTC loss/decoding instead of Transducer loss. You may find more info on the detail of this model here: [Citrinet Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).
## Training
The NeMo toolkit [3] was used for training the models for over several hundred epochs. These models were trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/citrinet/citrinet_1024.yaml) (Note: change the `model.model_defaults.filters` value to match the model size).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
All the models in this collection are trained on just the LibriSpeech dataset:
- Librispeech 960 hours of English speech
## Performance
The list of the available models in this collection is shown in the following table. Performances of the ASR models are reported in terms of Word Error Rate (WER%) with greedy decoding.
| Version | Tokenizer | Vocabulary Size | LS test-other | LS test-clean |
|---------|---------------------------|-----------------|---------------|---------------|
| 1.0.0 | SentencePiece Unigram [2] | 256 | 6.3 | 2.5 |
## Limitations
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## Deployment with NVIDIA Riva
For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
[1] [ Citrinet: Closing the Gap between Non-Autoregressive and Autoregressive End-to-End Models for Automatic Speech Recognition](https://arxiv.org/abs/2104.01721)
[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
## Licence
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
AdelaZ/adelacq-dog-heywhale
|
AdelaZ
| 2023-01-30T15:34:29Z | 0 | 1 |
diffusers
|
[
"diffusers",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"animal",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-16T17:12:56Z |
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- animal
widget:
- text: a adelacq dog sitting on top of the deck of a battle ship traveling through
the open sea with a lot of ships surrounding it
---
# DreamBooth model for the adelacq concept trained by AdelaZ.
This is a Stable Diffusion model fine-tuned on the adelacq concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of adelacq dog**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `dog` images for the animal theme,
for the Hugging Face DreamBooth Hackathon, organised by the HF CN Community
in collaboration with HeyWhale.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('AdelaZ/adelacq-dog-heywhale')
image = pipeline('a photo of adelacq dog').images[0]
image
```
|
Joqsan/bert-base-uncased-finetuned-qnli
|
Joqsan
| 2023-01-30T15:29:20Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T12:53:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-qnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-qnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ahmetayrnc/spanbert-base-cased
|
ahmetayrnc
| 2023-01-30T15:12:41Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:silicone",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T12:06:11Z |
---
tags:
- generated_from_trainer
datasets:
- silicone
metrics:
- accuracy
model-index:
- name: spanbert-base-cased
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: silicone
type: silicone
config: swda
split: test
args: swda
metrics:
- name: Accuracy
type: accuracy
value: 0.7114959469417833
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the silicone dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0346
- Accuracy: 0.7115
- Micro-precision: 0.7115
- Micro-recall: 0.7115
- Micro-f1: 0.7115
- Macro-precision: 0.2484
- Macro-recall: 0.2508
- Macro-f1: 0.2412
- Weighted-precision: 0.6569
- Weighted-recall: 0.7115
- Weighted-f1: 0.6741
## Model description
More information needed
## Intended uses & limitations
More information needed
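Not part of the original card: a minimal inference sketch, assuming the fine-tuned checkpoint and tokenizer load directly from this repository (the label names returned depend on how the classification head was configured for the silicone `swda` dialogue-act labels):
```python
from transformers import pipeline

# Hypothetical usage example for quick qualitative checks.
classifier = pipeline("text-classification", model="ahmetayrnc/spanbert-base-cased")
print(classifier("Okay, that sounds good to me."))
```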
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Micro-precision | Micro-recall | Micro-f1 | Macro-precision | Macro-recall | Macro-f1 | Weighted-precision | Weighted-recall | Weighted-f1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 1.043 | 1.0 | 2980 | 1.0346 | 0.7115 | 0.7115 | 0.7115 | 0.7115 | 0.2484 | 0.2508 | 0.2412 | 0.6569 | 0.7115 | 0.6741 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
optimum/bert-base-uncased-for-masked-lm
|
optimum
| 2023-01-30T14:59:19Z | 13 | 0 |
transformers
|
[
"transformers",
"onnx",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-01-30T13:25:39Z |
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
This model was exported for the masked-lm task with the following command:
```shell
python3 -m optimum.exporters.onnx --model bert-base-uncased --for-ort --task masked-lm models/
```
If you want to use `bert-base-uncased` for other tasks, please export the ONNX model with your corresponding task.
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model variations
BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole word masking has replaced subpiece masking in a following work, with the release of two models.
Another 24 smaller models were released afterward.
The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on github.
| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English |
| [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English |
| [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English |
| [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English |
| [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese |
| [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple |
| [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English |
| [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English |
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling from the [Optimum library](https://huggingface.co/docs/optimum/main/en/index):
```python
>>> from optimum.pipelines import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased', accelerator="ort")
>>> unmasker("The capital of France is [MASK].")
[{'score': 0.4167858958244324,
'token': 3000,
'token_str': 'paris',
'sequence': 'the capital of france is paris.'},
{'score': 0.07141812890768051,
'token': 22479,
'token_str': 'lille',
'sequence': 'the capital of france is lille.'},
{'score': 0.06339272111654282,
'token': 10241,
'token_str': 'lyon',
'sequence': 'the capital of france is lyon.'},
{'score': 0.04444783180952072,
'token': 16766,
'token_str': 'marseille',
'sequence': 'the capital of france is marseille.'},
{'score': 0.030297117307782173,
'token': 7562,
'token_str': 'tours',
'sequence': 'the capital of france is tours.'}
]
```
Here is how to use this model to fill the masked token with ONNX Runtime backend:
```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = ORTModelForMaskedLM.from_pretrained("bert-base-uncased", from_transformers=True)
text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from optimum.pipelines import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased', accelerator="ort")
>>> unmasker("The man worked as a [MASK].")
[{'score': 0.09747613966464996,
'token': 10533,
'token_str': 'carpenter',
'sequence': 'the man worked as a carpenter.'},
{'score': 0.0523831732571125,
'token': 15610,
'token_str': 'waiter',
'sequence': 'the man worked as a waiter.'},
{'score': 0.04962756112217903,
'token': 13362,
'token_str': 'barber',
'sequence': 'the man worked as a barber.'},
{'score': 0.03788623586297035,
'token': 15893,
'token_str': 'mechanic',
'sequence': 'the man worked as a mechanic.'},
{'score': 0.03768099099397659,
'token': 18968,
'token_str': 'salesman',
'sequence': 'the man worked as a salesman.'}]
>>> unmasker("The woman worked as a [MASK].")
[{'score': 0.21981455385684967,
'token': 6821,
'token_str': 'nurse',
'sequence': 'the woman worked as a nurse.'},
{'score': 0.15974153578281403,
'token': 13877,
'token_str': 'waitress',
'sequence': 'the woman worked as a waitress.'},
{'score': 0.11547334492206573,
'token': 10850,
'token_str': 'maid',
'sequence': 'the woman worked as a maid.'},
{'score': 0.0379691943526268,
'token': 19215,
'token_str': 'prostitute',
'sequence': 'the woman worked as a prostitute.'},
{'score': 0.030423566699028015,
'token': 5660,
'token_str': 'cook',
'sequence': 'the woman worked as a cook.'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
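Illustrative only (not the actual pretraining code): the rule above can be sketched as follows, with the `[MASK]` id and vocabulary size as placeholders:
```python
import random

def mask_tokens(token_ids, mask_id=103, vocab_size=30000, mask_prob=0.15):
    labels = [-100] * len(token_ids)           # -100: position is ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:        # 15% of tokens are selected
            labels[i] = tok                    # the model must predict the original token
            r = random.random()
            if r < 0.8:                        # 80%: replace with [MASK]
                token_ids[i] = mask_id
            elif r < 0.9:                      # 10%: replace with a random token
                token_ids[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token unchanged
    return token_ids, labels
```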
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ahmetayrnc/distilroberta-base
|
ahmetayrnc
| 2023-01-30T14:56:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:silicone",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-29T13:22:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- silicone
metrics:
- accuracy
model-index:
- name: distilroberta-base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: silicone
type: silicone
config: swda
split: test
args: swda
metrics:
- name: Accuracy
type: accuracy
value: 0.7111274871039057
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the silicone dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9647
- Accuracy: 0.7111
- Micro-precision: 0.7111
- Micro-recall: 0.7111
- Micro-f1: 0.7111
- Macro-precision: 0.3228
- Macro-recall: 0.2866
- Macro-f1: 0.2824
- Weighted-precision: 0.6683
- Weighted-recall: 0.7111
- Weighted-f1: 0.6768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Micro-precision | Micro-recall | Micro-f1 | Macro-precision | Macro-recall | Macro-f1 | Weighted-precision | Weighted-recall | Weighted-f1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 0.9578 | 1.0 | 2980 | 0.9647 | 0.7111 | 0.7111 | 0.7111 | 0.7111 | 0.3228 | 0.2866 | 0.2824 | 0.6683 | 0.7111 | 0.6768 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Laurie/sentiment-classify
|
Laurie
| 2023-01-30T14:56:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T13:48:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: sentiment-classify
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93032
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-classify
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2395
- Accuracy: 0.9303
## Model description
More information needed
## Intended uses & limitations
More information needed
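Not part of the original card: a hedged inference sketch, assuming the checkpoint loads with the standard sequence-classification head used during fine-tuning:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "Laurie/sentiment-classify"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
inputs = tokenizer("A touching film with superb performances.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # mapping of ids to positive/negative depends on the training script
```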
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2317 | 1.0 | 1563 | 0.1850 | 0.928 |
| 0.1448 | 2.0 | 3126 | 0.2395 | 0.9303 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_mnli_128
|
gokuls
| 2023-01-30T14:32:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T06:51:57Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_mnli_128
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5949959316517494
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_mnli_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2689
- Accuracy: 0.5950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6825 | 1.0 | 3068 | 1.4581 | 0.5256 |
| 1.4941 | 2.0 | 6136 | 1.3516 | 0.5680 |
| 1.4199 | 3.0 | 9204 | 1.3259 | 0.5712 |
| 1.3747 | 4.0 | 12272 | 1.3024 | 0.5856 |
| 1.34 | 5.0 | 15340 | 1.2875 | 0.5931 |
| 1.3087 | 6.0 | 18408 | 1.2730 | 0.5928 |
| 1.2769 | 7.0 | 21476 | 1.2845 | 0.5916 |
| 1.246 | 8.0 | 24544 | 1.2750 | 0.5965 |
| 1.2166 | 9.0 | 27612 | 1.2651 | 0.6020 |
| 1.1883 | 10.0 | 30680 | 1.2773 | 0.6043 |
| 1.1604 | 11.0 | 33748 | 1.2555 | 0.6011 |
| 1.1329 | 12.0 | 36816 | 1.2792 | 0.5991 |
| 1.1074 | 13.0 | 39884 | 1.2891 | 0.5986 |
| 1.0812 | 14.0 | 42952 | 1.2889 | 0.5947 |
| 1.0577 | 15.0 | 46020 | 1.2871 | 0.5970 |
| 1.0338 | 16.0 | 49088 | 1.3296 | 0.6026 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_mnli_256
|
gokuls
| 2023-01-30T14:21:19Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T06:57:33Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_mnli_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6119812855980472
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_mnli_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2282
- Accuracy: 0.6120
## Model description
More information needed
## Intended uses & limitations
More information needed
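Not part of the original card: since MNLI is a sentence-pair task, a hedged inference sketch passes a premise and a hypothesis together (the label order depends on the training configuration):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_mnli_256"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
enc = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    probs = model(**enc).logits.softmax(dim=-1)
print(probs)
```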
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6433 | 1.0 | 3068 | 1.4078 | 0.5457 |
| 1.4683 | 2.0 | 6136 | 1.3590 | 0.5658 |
| 1.4077 | 3.0 | 9204 | 1.3106 | 0.5772 |
| 1.3591 | 4.0 | 12272 | 1.2971 | 0.5904 |
| 1.3213 | 5.0 | 15340 | 1.2764 | 0.5957 |
| 1.2849 | 6.0 | 18408 | 1.2562 | 0.6029 |
| 1.2475 | 7.0 | 21476 | 1.2524 | 0.6038 |
| 1.2073 | 8.0 | 24544 | 1.2384 | 0.6066 |
| 1.1713 | 9.0 | 27612 | 1.2377 | 0.6109 |
| 1.1371 | 10.0 | 30680 | 1.2228 | 0.6077 |
| 1.1069 | 11.0 | 33748 | 1.2126 | 0.6196 |
| 1.0775 | 12.0 | 36816 | 1.2232 | 0.6271 |
| 1.0491 | 13.0 | 39884 | 1.2440 | 0.6110 |
| 1.0228 | 14.0 | 42952 | 1.2741 | 0.6079 |
| 0.9977 | 15.0 | 46020 | 1.2448 | 0.6158 |
| 0.974 | 16.0 | 49088 | 1.3261 | 0.6206 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_rte
|
gokuls
| 2023-01-30T14:18:02Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T14:14:47Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5451263537906137
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_rte
This model is a fine-tuned version of [gokuls/mobilebert_sa_pre-training-complete](https://huggingface.co/gokuls/mobilebert_sa_pre-training-complete) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3884
- Accuracy: 0.5451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4107 | 1.0 | 20 | 0.3951 | 0.5126 |
| 0.3757 | 2.0 | 40 | 0.3914 | 0.4982 |
| 0.347 | 3.0 | 60 | 0.3884 | 0.5451 |
| 0.3072 | 4.0 | 80 | 0.4022 | 0.5126 |
| 0.2762 | 5.0 | 100 | 0.4116 | 0.5271 |
| 0.2457 | 6.0 | 120 | 0.4073 | 0.5271 |
| 0.2215 | 7.0 | 140 | 0.4115 | 0.5487 |
| 0.2059 | 8.0 | 160 | 0.4231 | 0.5343 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ahmetayrnc/bert-large-cased
|
ahmetayrnc
| 2023-01-30T13:58:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:silicone",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-30T13:16:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- silicone
metrics:
- accuracy
model-index:
- name: bert-large-cased
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: silicone
type: silicone
config: swda
split: test
args: swda
metrics:
- name: Accuracy
type: accuracy
value: 0.7280766396462786
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the silicone dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8807
- Accuracy: 0.7281
- Micro-precision: 0.7281
- Micro-recall: 0.7281
- Micro-f1: 0.7281
- Macro-precision: 0.4591
- Macro-recall: 0.3825
- Macro-f1: 0.3855
- Weighted-precision: 0.6943
- Weighted-recall: 0.7281
- Weighted-f1: 0.6977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Micro-precision | Micro-recall | Micro-f1 | Macro-precision | Macro-recall | Macro-f1 | Weighted-precision | Weighted-recall | Weighted-f1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 0.8835 | 1.0 | 2980 | 0.8807 | 0.7281 | 0.7281 | 0.7281 | 0.7281 | 0.4591 | 0.3825 | 0.3855 | 0.6943 | 0.7281 | 0.6977 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
sarthakc44/q-Taxi-v3-500x6-v2
|
sarthakc44
| 2023-01-30T13:48:24Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-30T13:48:20Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-500x6-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

model = load_from_hub(repo_id="sarthakc44/q-Taxi-v3-500x6-v2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
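Note that `load_from_hub` is not a library function; a minimal helper in the spirit of the Deep RL course notebooks might look like this (hedged sketch, assuming the model was pushed as a pickled dictionary):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dictionary from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```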
|
inkasaras/ppo-LunarLander-v2
|
inkasaras
| 2023-01-30T13:48:17Z | 1 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-28T13:59:47Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.83 +/- 20.42
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is a guess; check the repository's files for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is a guess; check the repo's files for the actual checkpoint name.
model = PPO.load(load_from_hub("inkasaras/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip"))
```
|