modelId | tags | pipeline_tag | config | downloads | first_commit | card
---|---|---|---|---|---|---|
Bosio/full-sentence-distillroberta3-finetuned-wikitext2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-12-30T18:26:53Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: Nishant91/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
BossLee/t5-gec | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 6 | 2022-12-30T18:26:58Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('bobber/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
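If you have a GPU available, moving the pipeline to the device speeds up sampling considerably. A minimal sketch, assuming a CUDA-capable machine (the batch size and output filename are illustrative):

```python
import torch
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('bobber/sd-class-butterflies-64')
pipeline.to('cuda')  # assumes a CUDA GPU is available

images = pipeline(batch_size=4).images  # sample four butterflies in one pass
images[0].save('butterfly.png')
```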
|
BotterHax/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-12-30T18:38:38Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebooks
# (it downloads and unpickles the model dictionary from the Hub).
model = load_from_hub(repo_id="zzmez/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
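To roll out the loaded policy, act greedily with respect to the stored Q-table. A sketch under Deep RL Course conventions; the `"qtable"` key is an assumption about the pickled dictionary, and the classic Gym API is assumed (for gym >= 0.26 / Gymnasium, `reset()` returns `(obs, info)` and `step()` returns five values):

```python
import gym
import numpy as np

env = gym.make(model["env_id"], is_slippery=False)
state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```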
|
Branex/gpt-neo-2.7B | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-12-30T18:38:41Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebooks
# (it downloads and unpickles the model dictionary from the Hub).
model = load_from_hub(repo_id="srandazzo/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BrianTin/MTBERT | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2022-12-30T19:00:04Z | ---
license: apache-2.0
---
Actually not a bug: https://github.com/NVIDIA/TensorRT/issues/2577 |
BritishLibraryLabs/bl-books-genre | [
"pytorch",
"distilbert",
"text-classification",
"multilingual",
"dataset:blbooksgenre",
"transformers",
"genre",
"books",
"library",
"historic",
"glam ",
"lam",
"license:mit",
"has_space"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 76 | 2022-12-30T19:05:17Z | ---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- kmfoda/booksum
pipeline_tag: summarization
model-index:
- name: pszemraj/tglobal-large-booksum-WIP4-r1
results:
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- type: rouge
value: 13.4352
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTc2OGIyNzZkN2ViMmIwMGE0ZGIzNDAzZjE0NDQ4YzU5NTA1NGIxYjUyODc1ZTJmZjAxNWFlZjJjYTllMTI5NyIsInZlcnNpb24iOjF9.kC9UAdWPCeNt_jc8tjndrDGnwiuvlU4K2Weun8AyVZobRQUSqyV9QBIBieKMa9uYCJjnx1GeDLWi9C8N6JC3AA
- type: rouge
value: 0.8815
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmNiNTNjNzI3ZDgzMDAwNmM5ZGZiNzk5M2JhYjM4NWE3MDI5OGFmZmZmZTYyZDA0MjJlNDYxZTY0ZjVjNzA4OSIsInZlcnNpb24iOjF9.fKY2Hg9itbCugjx8-PhX3ZAHFOdSzK5Ysx2Xz0KDj5KVXtChuhwWmhXCCwJhlUiC6Vvkqtg8sn2Be9OZCnyEBQ
- type: rouge
value: 9.619
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzVmNTZkZmYzOGQyMjBhYjMyN2U5NmUxZTMxY2RhYzk4ODUyNWFmMjU2YWM3ZGI3ZGU2MmRlMDBhMjZhOThiMiIsInZlcnNpb24iOjF9.daljxkI6dZ0WqOvi7t99ABkyRcnBbwBKx3jSHaIJW8mojpWFr3PB5aheDB84KWy-IvvaVERrLLBq4nWpMvxoBA
- type: rouge
value: 11.597
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjE3NzUxOTVlMGVlZTdlOWQ5NTA4MDc5ZTljYzY3NDI1ZWMzNTUyODEwZGE1OWEzODEwNzEyNTBhMDY0NDliNiIsInZlcnNpb24iOjF9.ZVxD5mF_XhtirDulXNGfkZOSPyQovtUoYnvFyXFRPiAdjfAI9chvX770joAoUJ4stTxSIhlCY6wfxJmvoZudDQ
- type: loss
value: 5.3879194259643555
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTRmZDE2Y2VhOGQ0NjcxNDJmMDg0NDEyOTMxZDdhOTc0ZDlmZTJlMDE5NjJmZTc5ZjZjMmI5Mjc2MmIxZGVhMiIsInZlcnNpb24iOjF9.oTfN-B3mr_QrUtakbnSNMW2_2rq35-vzSuf7mwFZ0GF-NnFqv_tiZGphXzQxjrSQrxFZtvxiY5hlLZAdRGCdDg
- type: gen_len
value: 42.5849
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTE3YmUxNzQ4ZTVkNjNiYzRkODJhZmI0YmNjZmIxZjQ3M2MyNDI0MTA2OTJkNjI0MDE2MWY1MjUxYTU1YzJjYiIsInZlcnNpb24iOjF9.g2bpwOV1F2eQEZYw3jSJ7DiKD1GelyV7UYaa_8UtpuLk-zfTWISb39mR2kSr_k-ZhjOrutyJT2QrBF550PAKAA
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- type: rouge
value: 27.7008
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDMzNjhjMjcyM2RhNzQyOTY4ZGNjYjYwYzY4MThiMWVlYWNiNzUwZWIxYjdiZDcwOTZiZjQyYzgyNTNmZjNlNiIsInZlcnNpb24iOjF9.4vJrcTLCMI4J_oEb1Pw_gVbAtOwui0SOAHm-aGOiLI_QtB0lk7xaEadQKAhIDDEZ1TIVW8n8umbKzUdjObtbAQ
- type: rouge
value: 2.8545
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTUwNGY0N2UzYmY3MGFjZTk5N2ZkYWMwYjk5YjFkNDgyNDc5YzIxYTYyYmViNGE3N2M0ZTc5ODRmNDlkMzNlMiIsInZlcnNpb24iOjF9.zfXkSVujZU6zjsKV71UFsk_vMY5nBbcUjaHUs0x2WyaIK9fDz_nfVX57l3rXDhETKm6v3ZrdafM5XBnNDPwWBg
- type: rouge
value: 12.6912
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDk1YzIxZGZjYmNkN2YyNjYyNjk0MjI5ZTUxYWFmYTkzMTMzMDM2MDIzMTJkMWQ3YzdiOWFmZTZiMDc4NzJjZSIsInZlcnNpb24iOjF9.ZKJFzfz8p_YD3KKb0TTaDNpUCZRoEpMs9pjgLg2IAusyNgmCUe6TJgFvZoUJlzsPaUplhlQnZGm4NFcqjqaiDQ
- type: rouge
value: 25.3203
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWExM2MzM2Y0NmE4NmI1MTI4YTYwZjhhN2ZiOWZlZTE4M2E0M2Y4OWY0NDI2ZTZmNzA2NTliM2E3M2MxMzU2OCIsInZlcnNpb24iOjF9.wovBXHo6A3kqWlGa_0cbzTmh6raYlmuKM08EOIHwEKgh4bfO-TiyZyhDFy-uF-v9hF52j5A4YgUHRDjiFj1UDQ
- type: loss
value: 4.574217319488525
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2EwYzZjOWQxZDVlZGU4NWUxNTkyODg2ZDNhMTk2YWM0MGYxMDk0NzMyYTBlNzRmMWY1ZWNmMGE2NTY4NmYyZCIsInZlcnNpb24iOjF9.wwiAePQ6OrSYO_PG-4AK_hTpb7oBQK1CgXMjFhs49g8q0KqH0iwiUhpA1prXZzWDqngvL-SHbFZEyDBcqPzfDA
- type: gen_len
value: 144.9217
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjdkOTA2NTc4NjFjMmMxYzNkZTUyZjA4NDAzMjBhYTc3NDRjZWMzMzk0NzhhMjVlYTE4NGZiMzdlZTUyYjhkMSIsInZlcnNpb24iOjF9.YmlZxU0f-rBp1Ywav5jlMe2YpxXbTAqyPYt88ItYzEqfAI6ZCKEsm2ESX80txzZ3O2Qwp6ouYszKJCfhb1qNDw
---
# pszemraj/tglobal-large-booksum-WIP4-r1
WIP - not ready for downstream use.
**All metrics added here are for the purpose of _comparing_ against eval metrics on these datasets _later_, after further training.** |
Broadus20/DialoGPT-small-joshua | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2022-12-30T19:19:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- f1
model-index:
- name: vit-base-riego
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: MaxP--agro_riego
split: test
args: MaxP--agro_riego
metrics:
- name: F1
type: f1
value: 0.37288135593220334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-riego
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2998
- F1: 0.3729
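A minimal inference sketch with the 🤗 `pipeline` API; the repo id is a placeholder for the actual Hub id, and the image path is illustrative:

```python
from transformers import pipeline

# "<user>/vit-base-riego" is a placeholder -- substitute the actual Hub repo id.
classifier = pipeline("image-classification", model="<user>/vit-base-riego")
print(classifier("field.jpg"))  # returns the fine-tuned labels with scores
```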
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1696 | 0.79 | 100 | 1.1385 | 0.352 |
| 0.08 | 1.59 | 200 | 0.9071 | 0.3774 |
| 0.0928 | 2.38 | 300 | 1.1181 | 0.3454 |
| 0.0189 | 3.17 | 400 | 0.8262 | 0.3425 |
| 0.0728 | 3.97 | 500 | 0.9647 | 0.3747 |
| 0.0756 | 4.76 | 600 | 0.6097 | 0.4776 |
| 0.0018 | 5.56 | 700 | 1.3900 | 0.3652 |
| 0.002 | 6.35 | 800 | 0.7498 | 0.4606 |
| 0.0304 | 7.14 | 900 | 1.4367 | 0.3666 |
| 0.0024 | 7.94 | 1000 | 1.5714 | 0.3041 |
| 0.0463 | 8.73 | 1100 | 0.8038 | 0.4016 |
| 0.0014 | 9.52 | 1200 | 0.7175 | 0.4795 |
| 0.0015 | 10.32 | 1300 | 1.0347 | 0.3959 |
| 0.0009 | 11.11 | 1400 | 1.3881 | 0.3670 |
| 0.0131 | 11.9 | 1500 | 1.0780 | 0.4044 |
| 0.0007 | 12.7 | 1600 | 0.9834 | 0.4255 |
| 0.0011 | 13.49 | 1700 | 1.0753 | 0.4033 |
| 0.0007 | 14.29 | 1800 | 1.1514 | 0.3989 |
| 0.0007 | 15.08 | 1900 | 1.2373 | 0.3769 |
| 0.0007 | 15.87 | 2000 | 1.2998 | 0.3729 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Brona/poc_de | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-12-30T19:30:54Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 639.50 +/- 125.55
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga claterza -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga claterza -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga claterza
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.5),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 9.5e-05),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Bryan190/Aguy190 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-12-30T19:43:30Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### rbto3v5 Dreambooth model trained by rudzinskimaciej with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Bryanwong/wangchanberta-ner | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-12-30T19:44:28Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 767.50 +/- 321.09
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga numan966 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga numan966 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga numan966
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0002),
('learning_starts', 100000),
('n_timesteps', 3000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Brykee/DialoGPT-medium-Morty | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2022-12-30T19:48:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2_sv
model-index:
- name: flan-t5-base-squad-swe2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-squad-swe2
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the squad_v2_sv dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4248
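A sketch of querying the model for Swedish extractive QA; the repo id is a placeholder, and the `question: ... context: ...` prompt format is an assumption based on common T5 SQuAD fine-tunes:

```python
from transformers import pipeline

# "<user>/flan-t5-base-squad-swe2" is a placeholder -- substitute the actual Hub repo id.
qa = pipeline("text2text-generation", model="<user>/flan-t5-base-squad-swe2")
prompt = "question: Vad heter Sveriges huvudstad? context: Stockholm är Sveriges huvudstad."
print(qa(prompt)[0]["generated_text"])
```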
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0881 | 1.0 | 890 | 1.6422 |
| 1.7772 | 2.0 | 1780 | 1.5586 |
| 1.6763 | 3.0 | 2670 | 1.5153 |
| 1.6215 | 4.0 | 3560 | 1.4852 |
| 1.5912 | 5.0 | 4450 | 1.4629 |
| 1.5651 | 6.0 | 5340 | 1.4481 |
| 1.5407 | 7.0 | 6230 | 1.4374 |
| 1.5278 | 8.0 | 7120 | 1.4308 |
| 1.5137 | 9.0 | 8010 | 1.4269 |
| 1.5116 | 10.0 | 8900 | 1.4248 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Bubb-les/DisloGPT-medium-HarryPotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-12-30T19:51:35Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebooks
# (it downloads and unpickles the model dictionary from the Hub).
model = load_from_hub(repo_id="zzmez/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BumBelDumBel/TRUMP | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: mit
---
Port of the original lilt-only-base model weights from the [Language-Independent Layout Transformer (LiLT)](https://arxiv.org/pdf/2202.13669v1.pdf)
The weights found here are not useful standalone; they should instead be used in combination with RoBERTa-like models, as outlined [HERE](https://github.com/jpWang/LiLT#or-generate-your-own-checkpoint-optional)
This repository aims to make it easier for others to combine LiLT with a Roberta-like model of their liking.
Please refer to the following script on how to fuse XLM-Roberta with LiLT for multi-modal training/fine-tuning [HERE](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LiLT/Create_LiLT_%2B_XLM_RoBERTa_base.ipynb) |
Bwehfuk/Ron | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-12-30T20:19:09Z | ---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- text-to-image
--- |
CALM/backup | [
"lean_albert",
"transformers"
]
| null | {
"architectures": [
"LeanAlbertForPretraining",
"LeanAlbertForTokenClassification",
"LeanAlbertForSequenceClassification"
],
"model_type": "lean_albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2022-12-30T20:21:03Z | ---
language:
- en
thumbnail: "https://huggingface.co/wavymulder/timeless-diffusion/resolve/main/imgs/page1.jpg"
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- safetensors
- diffusers
inference: true
---
**Timeless Diffusion**

[*CKPT DOWNLOAD LINK*](https://huggingface.co/wavymulder/timeless-diffusion/resolve/main/timeless-1.0.ckpt) - This is a dreambooth model trained on a diverse set of colourized photographs from the 1880s-1980s.
Use the activation token **timeless style** in your prompt (I recommend at the start)
The goal of this model was to create striking images with rich tones and an anachronistic feel.
When using this model, I typically use **painted illustration blur haze monochrome** in my negative prompt. I encourage you to experiment and see what works well for you.
Trained from 1.5 with VAE.
Please see [this document where I share the parameters (prompt, sampler, seed, etc.) used for all example images.](https://huggingface.co/wavymulder/timeless-diffusion/resolve/main/parameters_for_samples.txt)
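Since the repo also ships `diffusers` weights (per the tags above), it can be loaded directly as well. A sketch, with an illustrative prompt built from the activation token and the suggested negative prompt:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "wavymulder/timeless-diffusion", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA GPU

prompt = "timeless style, portrait of an explorer, rich tones"
negative = "painted illustration blur haze monochrome"
image = pipe(prompt, negative_prompt=negative).images[0]
image.save("timeless_sample.png")
```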
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Timeless Diffusion:
[](https://huggingface.co/spaces/wavymulder/timeless-diffusion)


|
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42 | 2022-12-30T20:25:10Z | <h1>CharTurner</h1>
Hey there! I'm a working artist, and I loathe doing character turnarounds; I find them the least fun part of character design. I've been working on an embedding that helps with this process, and, though it's not where I want it to be, I was encouraged to release it under the Minimum Viable Product principle.
I'm also working on a few more character embeddings, including a head turnaround and an expression sheet. They're still way too raw to release tho.
Is there some type of embedding that would be useful for you? Let me know; I'm having fun making tools to fix all the stuff I hate doing by hand.
<h2>v1 is still a little bit... fiddly.</h2>
- **Sampler:** I use DPM++ 2M Karras or DDIM most often.
- **Highres. fix:** ON for best results.
- **Orientation:** landscape will get you more 'turns'; square images tend toward just front and back.
I like https://civitai.com/models/2540/elldreths-stolendreams-mix to make characters in.
I use an embedding trained on my own art (smoose) that I will release if people want it? But it's an aesthetic thing, just my own vibe.
I didn't really test this in any of the waifu/NAI type models, as I don't usually use them. Looks like it works but it probably has its own special dance.
<h1>Things I'm working on for v2:</h1>
It fights you on style sometimes. I'm adding a wider variety of art styles to the dataset to combat this.
Open-front coats and such tend to be open 'back' on the back view. Adding more types of clothing to the dataset to combat this.
Tends toward white and 'fit' characters, which isn't useful. Adding more diversity in body type and skin tone to the dataset to combat this.
Helps create multiple full body views of a character. The intention is to get at least a front and back, and ideally, a front, 3/4, profile, 1/4 and back versions, in the same outfit. |
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 18 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.01 +/- 10.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
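In the meantime, a minimal loading sketch under assumed conventions; the repo id placeholder and checkpoint filename follow common `huggingface_sb3` naming and are not confirmed by this card:

```python
import gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# "<repo_id>" is a placeholder -- substitute this model's Hub repo.
checkpoint = load_from_hub(repo_id="<repo_id>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Classic Gym API, as used by SB3 1.x (LunarLander requires Box2D).
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```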
|
CAMeL-Lab/bert-base-arabic-camelbert-ca | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 580 | 2022-12-30T20:57:02Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.61 +/- 0.49
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebooks
# (it downloads and unpickles the model dictionary from the Hub).
model = load_from_hub(repo_id="BrandonJaip/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-ner | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42 | 2022-12-30T21:07:39Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -11.89 +/- 3.15
name: mean_reward
verified: false
---
# **TQC** Agent playing **PandaReachDense-v2**
This is a trained model of a **TQC** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
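In the meantime, a sketch under assumed conventions; note that TQC lives in `sb3_contrib`, not core SB3, and the Panda environments come from `panda-gym`. The repo id and filename are placeholders:

```python
import gym
import panda_gym  # noqa: F401 -- registers the Panda environments
from sb3_contrib import TQC
from huggingface_sb3 import load_from_hub

# "<repo_id>" and the filename are placeholders following huggingface_sb3 conventions.
checkpoint = load_from_hub(repo_id="<repo_id>", filename="tqc-PandaReachDense-v2.zip")
model = TQC.load(checkpoint)

env = gym.make("PandaReachDense-v2")  # classic Gym API, as used by SB3 1.x
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```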
|
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | 2022-12-30T21:12:10Z | ---
license: cc-by-nc-4.0
---
# NPM-single
NPM-single is a nonparametric masked language model, pretrained on English text data.
It was introduced by ["Nonparametric Masked Language Modeling"][paper]
and first released in [facebookresearch/NPM][repo].
### Model description
NPM consists of an encoder and a reference corpus, and models a nonparametric distribution over a reference corpus.
The key idea is to map all the phrases in the corpus into a dense vector space using the
encoder and, when given a query with a MASK at inference, use the encoder to locate the nearest
phrase from the corpus and fill in the MASK.
NPM-single is a variant of NPM that retrieves a token from the corpus, instead of a phrase.
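Schematically, the retrieval step described above is a nearest-neighbor lookup over encoded corpus tokens. A purely illustrative sketch (the actual repo builds a proper index over the full 13B-token corpus):

```python
import numpy as np

def fill_mask(mask_vec, corpus_vecs, corpus_tokens):
    """Illustrative NPM-single inference: return the corpus token whose
    embedding has the highest inner product with the [MASK] representation."""
    scores = corpus_vecs @ mask_vec  # similarity of every corpus token to the query
    return corpus_tokens[int(np.argmax(scores))]
```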
### Intended uses & limitations
While this repo includes the encoder weights, NPM-single has to be used together with a datastore.
For more details on how to use NPM-single, please refer to the [original repo][repo].
Note that this model is primarily for filling in a MASK token. Future work can investigate how to use NPM-single for text generation.
### Training procedure
NPM-single was trained on English Wikipedia (August 2019) and an English portion of CC-News (Mackenzie et al. (2020), February 2019), which contains 13B tokens in total.
NPM-single used the model architecture and initial weights of RoBERTa large (Liu et al., 2019), consisting of 354M parameters.
Training is done for 100,000 steps, using thirty-two 32GB GPUs.
More details about training can be found in the [paper][paper].
Code for training NPM-single can be found in the [original repo][repo].
### Evaluation results
NPM-single is evaluated on nine closed-set tasks (tasks with a small set of options given).
NPM-single consistently outperforms significantly larger models such as GPT-3 and T5.
Detailed results can be found from the [paper][paper].
### BibTeX entry and citation info
```
@article{ min2022nonparametric,
title={ Nonparametric Masked Language Modeling },
author={ Min, Sewon and Shi, Weijia and Lewis, Mike and Chen, Xilun and Yih, Wen-tau and Hajishirzi, Hannaneh and Zettlemoyer, Luke },
year={ 2022 }
}
```
[paper]: https://arxiv.org/abs/2212.01349
[repo]: https://github.com/facebookresearch/NPM
|
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.32 +/- 0.99
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
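Pending the author's code, a loading sketch with hypothetical repo id and filename (following common `huggingface_sb3` naming); `panda-gym` provides the environment:

```python
import gym
import panda_gym  # noqa: F401 -- registers the Panda environments
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# "<repo_id>" and the filename are placeholders, not confirmed by this card.
checkpoint = load_from_hub(repo_id="<repo_id>", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")  # classic Gym API, as used by SB3 1.x
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```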
|
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebooks
# (it downloads and unpickles the model dictionary from the Hub).
model = load_from_hub(repo_id="lvodoleyl/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 449 | 2022-12-30T21:35:35Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebooks
# (it downloads and unpickles the model dictionary from the Hub).
model = load_from_hub(repo_id="arb9p4/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6 | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 34 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebooks
# (it downloads and unpickles the model dictionary from the Hub).
model = load_from_hub(repo_id="arb9p4/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | 2022-12-30T22:08:32Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### CRonaldolibya Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 132 | 2022-12-30T22:39:22Z | ---
license: openrail
---
The embedding was trained using A1111 TI for the 768px Stable Diffusion v2.0 model.
The embedding should work on any model that uses SD v2.0 as a base.
**TungstenDispo (v1)**
The TungstenDispo embedding was trained for 1000 epochs with a gradient batch size of 50.
A total of ~100 training images of tungsten photographs taken with CineStill 800T were used. The split was around 50/50 people/landscapes.
The effect isn't quite the tungsten photo effect I was going for, but creates very nice, artistic portraits of people. For some of the people, I used SoCalGuitarist's [Negative FaceLift](https://civitai.com/models/2385/socalguitarists-magic-facelift-negative-embedding-for-model-2x-fix-yo-ugly-faces) as a negative embedding.
I used it at 0.3 strength, and it seems to make the eyes slightly less wonky. Unclear extent of effect.
Landscapes haven't been experimented with much and are WIP.
<div align="center">
<img src="https://huggingface.co/fzbuzz/TungstenDispo-embedding-sd-v2-1/resolve/main/00000.png">
</div>
<div align="center">
<img src="https://huggingface.co/fzbuzz/TungstenDispo-embedding-sd-v2-1/resolve/main/00001.png">
</div>
<div align="center">
<img src="https://huggingface.co/fzbuzz/TungstenDispo-embedding-sd-v2-1/resolve/main/00002.png">
</div>
<div align="center">
<img src="https://huggingface.co/fzbuzz/TungstenDispo-embedding-sd-v2-1/resolve/main/00003.png">
</div>
---
**Workflow for Above Pictures**
Sampler: Euler-A, 20 Steps, CFG: 7.0. Slightly cherry-picked for best pictures.
900x768 -> 4x LDSR upscaled
Negative Prompt for all images (Not entirely sure if all of them matter, but does help a bit):
> (Neg_Facelift768:0.3), (blur:0.3), (cropped:1.3), (ugly:1.3), (bad anatomy:1.2), (disfigured:1.1), (deformed:1.1), (bad proportions:1.3), (extra limbs:1.2), (missing fingers:1.2), (extra fingers:1.2), (out of frame:1.3), (makeup:1.1)
Positive Prompts:
First Image:
>TungstenDispo, photoshoot of a asian female model with white hair, in a dark room, (closeup:0.2)
Second Image:
>(TungstenDispo:1.3), photoshoot of a (model:0.7), posed, in a dark room, highly detailed, (closeup:0.2), (skin pores:0.5)
Third Image:
>(TungstenDispo:1.2), photoshoot of a model, posed, in a dark room, (closeup:0.2)
Fourth Image:
>TungstenDispo, photoshoot of a male model, posed, in a dark room, (closeup:0.2)
**Usage for A1111 WebUI**
Download the TungstenDispo.pt file and put in embeddings/. Prepend "TungstenDispo" at start of prompt.
**More People Samples w/out exact workflow**
Pretty much the same, only changed up subject a little + weights.
<div align="center">
<img src="https://huggingface.co/fzbuzz/TungstenDispo-embedding-sd-v2-1/resolve/main/people_sample.png">
</div>
**Landscape Demo:**
Honestly did not turn out as good as anticipated. It really loves neon signs. Stay tuned for more exploration on landscapes :p
Sampler: DDIM, 80 Steps, CFG: 7.0. Slightly cherry-picked for best pictures.
768x1024
Positive Prompt (bolded part altered for image)
>(TungstenStyle:1.5), cinematic shot of a (empty:1.5) **symmetric glass foyer at night with light faintly shining through the windows**, highly detailed
Negative Prompt:
>billboards, signs, words, (blur:0.2)
<div align="center">
<img src="https://huggingface.co/fzbuzz/TungstenDispo-embedding-sd-v2-1/resolve/main/background_sample.png">
</div>
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,862 | 2022-12-30T22:43:08Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-q
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebooks
# (it downloads and unpickles the model dictionary from the Hub).
model = load_from_hub(repo_id="BrandonJaip/Taxi-v3-q", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 855 | 2022-12-30T22:45:30Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: t123ovebw
---
### tovebw Dreambooth model trained by duja1 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training), using the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
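A minimal local inference sketch (the repo id `duja1/tovebw` below is an assumption based on the trainer and concept names, not confirmed by this card):
```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id is assumed -- replace with the actual repository for this concept.
pipe = StableDiffusionPipeline.from_pretrained("duja1/tovebw", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of t123ovebw").images[0]
image.save("tovebw.png")
```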
Sample pictures of:
t123ovebw (use that in your prompt)

|
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 229 | null | ---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
metrics:
- type: mean_reward
value: 5.08 +/- 26.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the `repo_id` and `filename` below are placeholders, not given by this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo_id/filename -- replace with this model's actual values.
checkpoint = load_from_hub(repo_id="<user>/ppo-CarRacing-v0", filename="ppo-CarRacing-v0.zip")
model = PPO.load(checkpoint)
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 133 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 783.00 +/- 211.76
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bunkerj -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bunkerj -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Bunkerj
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="johnhudzinatr/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,967 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="Jac0b/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CAUKiel/JavaBERT | [
"pytorch",
"safetensors",
"bert",
"fill-mask",
"code",
"arxiv:2110.10404",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 388 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-qtable
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="Jac0b/taxi-v3-qtable", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CL/safe-math-bot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: es
license: gpl-3.0
tags:
- PyTorch
- Transformers
- Token Classification
- roberta
- roberta-base-bne
widget:
- text: "Fue antes de llegar a Sigüeiro, en el Camino de Santiago."
- text: "El proyecto lo financia el Ministerio de Industria y Competitividad."
model-index:
- name: roberta-bne-ner-cds
results: []
---
# Introduction
This model is a fine-tuned version of [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) for Named Entity Recognition in the domain of tourism related to the Way of Saint James (Camino de Santiago). It recognizes four types of entities: locations (LOC), organizations (ORG), persons (PER) and miscellaneous (MISC).
## Usage
You can use this model with the Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("roberta-bne-ner-cds")
model = AutoModelForTokenClassification.from_pretrained("roberta-bne-ner-cds")
example = "Fue antes de llegar a Sigüeiro, en el Camino de Santiago. El proyecto lo financia el Ministerio de Industria y Competitividad."
ner_pipe = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
for ent in ner_pipe(example):
print(ent)
```
## Dataset
ToDo
## Model performance
entity|precision|recall|f1
-|-|-|-
PER|0.965|0.924|0.944
ORG|0.900|0.701|0.788
LOC|0.982|0.985|0.983
MISC|0.798|0.874|0.834
micro avg (support: 4265)|0.964|0.968|0.966
macro avg|0.911|0.871|0.887
weighted avg|0.965|0.968|0.966
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
CLAck/en-vi | [
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-01-02T10:51:40Z | ---
tags:
- generated_from_trainer
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the None dataset.
It achieves the following results on the evaluation set (the entity names in this card appear with their first letter stripped by the auto-generated report, e.g. "Ame" = Name, "Kill" = Skill, "Hone" = Phone, "Du Degree" = Edu Degree):
- Loss: 1.1473
- Ame: {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13}
- Anguage: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}
- Areer: {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14}
- Ddress: {'precision': 0.6122448979591837, 'recall': 0.7894736842105263, 'f1': 0.6896551724137931, 'number': 38}
- Du Degree: {'precision': 0.8245614035087719, 'recall': 0.7966101694915254, 'f1': 0.8103448275862069, 'number': 59}
- Du End Date: {'precision': 0.7272727272727273, 'recall': 0.8421052631578947, 'f1': 0.7804878048780488, 'number': 19}
- Du Start Date: {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3}
- Du University: {'precision': 0.8222222222222222, 'recall': 0.8809523809523809, 'f1': 0.8505747126436781, 'number': 84}
- Hone: {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21}
- Kill: {'precision': 0.7416666666666667, 'recall': 0.5913621262458472, 'f1': 0.6580406654343807, 'number': 301}
- Mail: {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15}
- Umarize: {'precision': 0.6891891891891891, 'recall': 0.9532710280373832, 'f1': 0.8, 'number': 214}
- X Company: {'precision': 0.75177304964539, 'recall': 0.848, 'f1': 0.7969924812030075, 'number': 125}
- X Description: {'precision': 0.8356118706235411, 'recall': 0.9586840091813313, 'f1': 0.892927133440228, 'number': 2614}
- X End Date: {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33}
- X Location: {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66}
- X Position: {'precision': 0.7593984962406015, 'recall': 0.8782608695652174, 'f1': 0.8145161290322581, 'number': 115}
- X Start Date: {'precision': 0.7142857142857143, 'recall': 0.8974358974358975, 'f1': 0.7954545454545455, 'number': 39}
- Overall Precision: 0.8082
- Overall Recall: 0.9102
- Overall F1: 0.8562
- Overall Accuracy: 0.8177
## Model description
More information needed
## Intended uses & limitations
More information needed
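A minimal token-classification inference sketch (the checkpoint path `layoutlm-funsd` and the dummy bounding boxes are placeholders; real inputs need word boxes from an OCR step):
```python
import torch
from transformers import LayoutLMTokenizer, LayoutLMForTokenClassification

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("layoutlm-funsd")  # fine-tuned checkpoint

encoding = tokenizer("John Doe Software Engineer", return_tensors="pt")
seq_len = encoding.input_ids.shape[1]
bbox = torch.zeros((1, seq_len, 4), dtype=torch.long)  # dummy boxes; real ones use 0-1000 coords

outputs = model(input_ids=encoding.input_ids,
                attention_mask=encoding.attention_mask,
                bbox=bbox)
pred_label_ids = outputs.logits.argmax(-1)  # one predicted label id per token
```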
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Ame | Anguage | Areer | Ddress | Du Degree | Du End Date | Du Start Date | Du University | Hone | Kill | Mail | Umarize | X Company | X Description | X End Date | X Location | X Position | X Start Date | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.7239 | 1.0 | 36 | 0.9258 | {'precision': 0.5, 'recall': 0.8461538461538461, 'f1': 0.6285714285714286, 'number': 13} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 14} | {'precision': 0.36363636363636365, 'recall': 0.42105263157894735, 'f1': 0.3902439024390244, 'number': 38} | {'precision': 0.6595744680851063, 'recall': 0.5254237288135594, 'f1': 0.5849056603773585, 'number': 59} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 19} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5692307692307692, 'recall': 0.44047619047619047, 'f1': 0.4966442953020134, 'number': 84} | {'precision': 1.0, 'recall': 0.09523809523809523, 'f1': 0.17391304347826084, 'number': 21} | {'precision': 0.7513513513513513, 'recall': 0.46179401993355484, 'f1': 0.5720164609053497, 'number': 301} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 15} | {'precision': 0.6703296703296703, 'recall': 0.8551401869158879, 'f1': 0.7515400410677617, 'number': 214} | {'precision': 0.5459459459459459, 'recall': 0.808, 'f1': 0.6516129032258065, 'number': 125} | {'precision': 0.7384172661870504, 'recall': 0.981637337413925, 'f1': 0.8428313351946133, 'number': 2614} | {'precision': 0.8181818181818182, 'recall': 0.2727272727272727, 'f1': 0.4090909090909091, 'number': 33} | {'precision': 0.5825242718446602, 'recall': 0.9090909090909091, 'f1': 0.7100591715976331, 'number': 66} | {'precision': 0.7654320987654321, 'recall': 0.5391304347826087, 'f1': 0.6326530612244898, 'number': 115} | {'precision': 0.41975308641975306, 'recall': 0.8717948717948718, 'f1': 0.5666666666666667, 'number': 39} | 0.7108 | 0.8614 | 0.7789 | 0.7166 |
| 0.5611 | 2.0 | 72 | 0.8617 | {'precision': 0.41379310344827586, 'recall': 0.9230769230769231, 'f1': 0.5714285714285715, 'number': 13} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 14} | {'precision': 0.5283018867924528, 'recall': 0.7368421052631579, 'f1': 0.6153846153846154, 'number': 38} | {'precision': 0.7704918032786885, 'recall': 0.7966101694915254, 'f1': 0.7833333333333333, 'number': 59} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 19} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.7857142857142857, 'recall': 0.6547619047619048, 'f1': 0.7142857142857143, 'number': 84} | {'precision': 0.8181818181818182, 'recall': 0.42857142857142855, 'f1': 0.5625, 'number': 21} | {'precision': 0.5519287833827893, 'recall': 0.6179401993355482, 'f1': 0.5830721003134797, 'number': 301} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 15} | {'precision': 0.6428571428571429, 'recall': 0.883177570093458, 'f1': 0.7440944881889764, 'number': 214} | {'precision': 0.5380116959064327, 'recall': 0.736, 'f1': 0.6216216216216216, 'number': 125} | {'precision': 0.8103896103896104, 'recall': 0.954858454475899, 'f1': 0.8767123287671232, 'number': 2614} | {'precision': 0.75, 'recall': 0.9090909090909091, 'f1': 0.821917808219178, 'number': 33} | {'precision': 0.6176470588235294, 'recall': 0.9545454545454546, 'f1': 0.75, 'number': 66} | {'precision': 0.5818181818181818, 'recall': 0.8347826086956521, 'f1': 0.6857142857142857, 'number': 115} | {'precision': 0.5230769230769231, 'recall': 0.8717948717948718, 'f1': 0.6538461538461539, 'number': 39} | 0.7450 | 0.8842 | 0.8087 | 0.7601 |
| 0.3611 | 3.0 | 108 | 0.7966 | {'precision': 0.3870967741935484, 'recall': 0.9230769230769231, 'f1': 0.5454545454545454, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 14} | {'precision': 0.4716981132075472, 'recall': 0.6578947368421053, 'f1': 0.5494505494505494, 'number': 38} | {'precision': 0.7313432835820896, 'recall': 0.8305084745762712, 'f1': 0.7777777777777778, 'number': 59} | {'precision': 0.5, 'recall': 0.05263157894736842, 'f1': 0.09523809523809525, 'number': 19} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.7254901960784313, 'recall': 0.8809523809523809, 'f1': 0.7956989247311828, 'number': 84} | {'precision': 0.55, 'recall': 0.5238095238095238, 'f1': 0.5365853658536585, 'number': 21} | {'precision': 0.6604477611940298, 'recall': 0.5880398671096345, 'f1': 0.6221441124780316, 'number': 301} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 15} | {'precision': 0.5946745562130178, 'recall': 0.9392523364485982, 'f1': 0.7282608695652175, 'number': 214} | {'precision': 0.7815126050420168, 'recall': 0.744, 'f1': 0.7622950819672131, 'number': 125} | {'precision': 0.8127781309599491, 'recall': 0.978194338179036, 'f1': 0.8878472222222223, 'number': 2614} | {'precision': 0.7428571428571429, 'recall': 0.7878787878787878, 'f1': 0.7647058823529412, 'number': 33} | {'precision': 0.759493670886076, 'recall': 0.9090909090909091, 'f1': 0.8275862068965516, 'number': 66} | {'precision': 0.7559055118110236, 'recall': 0.8347826086956521, 'f1': 0.793388429752066, 'number': 115} | {'precision': 0.5223880597014925, 'recall': 0.8974358974358975, 'f1': 0.660377358490566, 'number': 39} | 0.7672 | 0.9057 | 0.8307 | 0.7855 |
| 0.2638 | 4.0 | 144 | 0.8725 | {'precision': 0.6, 'recall': 0.9230769230769231, 'f1': 0.7272727272727274, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 1.0, 'recall': 0.14285714285714285, 'f1': 0.25, 'number': 14} | {'precision': 0.5, 'recall': 0.7105263157894737, 'f1': 0.5869565217391304, 'number': 38} | {'precision': 0.7704918032786885, 'recall': 0.7966101694915254, 'f1': 0.7833333333333333, 'number': 59} | {'precision': 0.6666666666666666, 'recall': 0.21052631578947367, 'f1': 0.32, 'number': 19} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.7564102564102564, 'recall': 0.7023809523809523, 'f1': 0.728395061728395, 'number': 84} | {'precision': 0.6, 'recall': 0.5714285714285714, 'f1': 0.5853658536585366, 'number': 21} | {'precision': 0.7051792828685259, 'recall': 0.5880398671096345, 'f1': 0.6413043478260869, 'number': 301} | {'precision': 0.8181818181818182, 'recall': 0.6, 'f1': 0.6923076923076923, 'number': 15} | {'precision': 0.6891891891891891, 'recall': 0.9532710280373832, 'f1': 0.8, 'number': 214} | {'precision': 0.6645962732919255, 'recall': 0.856, 'f1': 0.7482517482517483, 'number': 125} | {'precision': 0.7947204968944099, 'recall': 0.9789594491201224, 'f1': 0.8772711690092561, 'number': 2614} | {'precision': 0.7, 'recall': 0.8484848484848485, 'f1': 0.7671232876712328, 'number': 33} | {'precision': 0.7375, 'recall': 0.8939393939393939, 'f1': 0.8082191780821918, 'number': 66} | {'precision': 0.8392857142857143, 'recall': 0.8173913043478261, 'f1': 0.8281938325991189, 'number': 115} | {'precision': 0.5625, 'recall': 0.9230769230769231, 'f1': 0.6990291262135923, 'number': 39} | 0.7677 | 0.9107 | 0.8331 | 0.7838 |
| 0.1886 | 5.0 | 180 | 0.7744 | {'precision': 0.75, 'recall': 0.9230769230769231, 'f1': 0.8275862068965517, 'number': 13} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.14285714285714285, 'f1': 0.23529411764705882, 'number': 14} | {'precision': 0.6, 'recall': 0.7105263157894737, 'f1': 0.6506024096385543, 'number': 38} | {'precision': 0.8541666666666666, 'recall': 0.6949152542372882, 'f1': 0.7663551401869159, 'number': 59} | {'precision': 0.5714285714285714, 'recall': 0.21052631578947367, 'f1': 0.3076923076923077, 'number': 19} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.7558139534883721, 'recall': 0.7738095238095238, 'f1': 0.7647058823529412, 'number': 84} | {'precision': 0.56, 'recall': 0.6666666666666666, 'f1': 0.6086956521739131, 'number': 21} | {'precision': 0.686046511627907, 'recall': 0.5880398671096345, 'f1': 0.6332737030411449, 'number': 301} | {'precision': 0.75, 'recall': 0.6, 'f1': 0.6666666666666665, 'number': 15} | {'precision': 0.6950354609929078, 'recall': 0.9158878504672897, 'f1': 0.7903225806451614, 'number': 214} | {'precision': 0.7012987012987013, 'recall': 0.864, 'f1': 0.7741935483870969, 'number': 125} | {'precision': 0.811411992263056, 'recall': 0.9628921193573068, 'f1': 0.8806857942617214, 'number': 2614} | {'precision': 0.7777777777777778, 'recall': 0.8484848484848485, 'f1': 0.8115942028985507, 'number': 33} | {'precision': 0.8333333333333334, 'recall': 0.9090909090909091, 'f1': 0.8695652173913043, 'number': 66} | {'precision': 0.7674418604651163, 'recall': 0.8608695652173913, 'f1': 0.8114754098360657, 'number': 115} | {'precision': 0.6271186440677966, 'recall': 0.9487179487179487, 'f1': 0.7551020408163266, 'number': 39} | 0.7834 | 0.9001 | 0.8377 | 0.7966 |
| 0.1446 | 6.0 | 216 | 0.8311 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.75, 'recall': 0.42857142857142855, 'f1': 0.5454545454545454, 'number': 14} | {'precision': 0.625, 'recall': 0.6578947368421053, 'f1': 0.6410256410256411, 'number': 38} | {'precision': 0.8, 'recall': 0.8813559322033898, 'f1': 0.8387096774193548, 'number': 59} | {'precision': 0.6666666666666666, 'recall': 0.631578947368421, 'f1': 0.6486486486486486, 'number': 19} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.8222222222222222, 'recall': 0.8809523809523809, 'f1': 0.8505747126436781, 'number': 84} | {'precision': 0.56, 'recall': 0.6666666666666666, 'f1': 0.6086956521739131, 'number': 21} | {'precision': 0.694980694980695, 'recall': 0.5980066445182725, 'f1': 0.6428571428571429, 'number': 301} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 15} | {'precision': 0.6743421052631579, 'recall': 0.9579439252336449, 'f1': 0.7915057915057915, 'number': 214} | {'precision': 0.7938931297709924, 'recall': 0.832, 'f1': 0.8125, 'number': 125} | {'precision': 0.825065274151436, 'recall': 0.9671002295332823, 'f1': 0.8904543853469532, 'number': 2614} | {'precision': 0.8235294117647058, 'recall': 0.8484848484848485, 'f1': 0.8358208955223881, 'number': 33} | {'precision': 0.8082191780821918, 'recall': 0.8939393939393939, 'f1': 0.8489208633093526, 'number': 66} | {'precision': 0.8245614035087719, 'recall': 0.8173913043478261, 'f1': 0.8209606986899564, 'number': 115} | {'precision': 0.6296296296296297, 'recall': 0.8717948717948718, 'f1': 0.7311827956989246, 'number': 39} | 0.7980 | 0.9118 | 0.8511 | 0.8104 |
| 0.1165 | 7.0 | 252 | 0.6237 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 0.3333333333333333, 'recall': 1.0, 'f1': 0.5, 'number': 1} | {'precision': 0.8333333333333334, 'recall': 0.35714285714285715, 'f1': 0.5, 'number': 14} | {'precision': 0.58, 'recall': 0.7631578947368421, 'f1': 0.6590909090909091, 'number': 38} | {'precision': 0.78125, 'recall': 0.847457627118644, 'f1': 0.8130081300813008, 'number': 59} | {'precision': 0.6818181818181818, 'recall': 0.7894736842105263, 'f1': 0.7317073170731707, 'number': 19} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.8271604938271605, 'recall': 0.7976190476190477, 'f1': 0.8121212121212122, 'number': 84} | {'precision': 0.5769230769230769, 'recall': 0.7142857142857143, 'f1': 0.6382978723404256, 'number': 21} | {'precision': 0.6370106761565836, 'recall': 0.5946843853820598, 'f1': 0.6151202749140894, 'number': 301} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 15} | {'precision': 0.6207951070336392, 'recall': 0.9485981308411215, 'f1': 0.7504621072088725, 'number': 214} | {'precision': 0.8203125, 'recall': 0.84, 'f1': 0.8300395256916997, 'number': 125} | {'precision': 0.8961887477313975, 'recall': 0.9445294567712318, 'f1': 0.9197243434531569, 'number': 2614} | {'precision': 0.8, 'recall': 0.8484848484848485, 'f1': 0.823529411764706, 'number': 33} | {'precision': 0.8285714285714286, 'recall': 0.8787878787878788, 'f1': 0.8529411764705883, 'number': 66} | {'precision': 0.8598130841121495, 'recall': 0.8, 'f1': 0.8288288288288289, 'number': 115} | {'precision': 0.7555555555555555, 'recall': 0.8717948717948718, 'f1': 0.8095238095238095, 'number': 39} | 0.8373 | 0.8943 | 0.8648 | 0.8434 |
| 0.0945 | 8.0 | 288 | 0.8679 | {'precision': 0.7222222222222222, 'recall': 1.0, 'f1': 0.8387096774193548, 'number': 13} | {'precision': 0.25, 'recall': 1.0, 'f1': 0.4, 'number': 1} | {'precision': 0.8, 'recall': 0.2857142857142857, 'f1': 0.4210526315789473, 'number': 14} | {'precision': 0.5576923076923077, 'recall': 0.7631578947368421, 'f1': 0.6444444444444444, 'number': 38} | {'precision': 0.8148148148148148, 'recall': 0.7457627118644068, 'f1': 0.7787610619469028, 'number': 59} | {'precision': 0.7333333333333333, 'recall': 0.5789473684210527, 'f1': 0.6470588235294117, 'number': 19} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.8026315789473685, 'recall': 0.7261904761904762, 'f1': 0.7625, 'number': 84} | {'precision': 0.75, 'recall': 0.7142857142857143, 'f1': 0.7317073170731706, 'number': 21} | {'precision': 0.7962085308056872, 'recall': 0.5581395348837209, 'f1': 0.65625, 'number': 301} | {'precision': 0.8571428571428571, 'recall': 0.8, 'f1': 0.8275862068965518, 'number': 15} | {'precision': 0.717391304347826, 'recall': 0.9252336448598131, 'f1': 0.8081632653061225, 'number': 214} | {'precision': 0.7552447552447552, 'recall': 0.864, 'f1': 0.8059701492537312, 'number': 125} | {'precision': 0.8083148206918439, 'recall': 0.9743687834736037, 'f1': 0.8836079791847355, 'number': 2614} | {'precision': 0.7567567567567568, 'recall': 0.8484848484848485, 'f1': 0.8000000000000002, 'number': 33} | {'precision': 0.8235294117647058, 'recall': 0.8484848484848485, 'f1': 0.8358208955223881, 'number': 66} | {'precision': 0.8083333333333333, 'recall': 0.8434782608695652, 'f1': 0.8255319148936171, 'number': 115} | {'precision': 0.6923076923076923, 'recall': 0.9230769230769231, 'f1': 0.7912087912087913, 'number': 39} | 0.7943 | 0.9083 | 0.8475 | 0.8094 |
| 0.0775 | 9.0 | 324 | 0.9210 | {'precision': 0.6842105263157895, 'recall': 1.0, 'f1': 0.8125000000000001, 'number': 13} | {'precision': 0.3333333333333333, 'recall': 1.0, 'f1': 0.5, 'number': 1} | {'precision': 0.75, 'recall': 0.42857142857142855, 'f1': 0.5454545454545454, 'number': 14} | {'precision': 0.5833333333333334, 'recall': 0.7368421052631579, 'f1': 0.6511627906976745, 'number': 38} | {'precision': 0.8, 'recall': 0.8135593220338984, 'f1': 0.8067226890756303, 'number': 59} | {'precision': 0.7083333333333334, 'recall': 0.8947368421052632, 'f1': 0.7906976744186046, 'number': 19} | {'precision': 1.0, 'recall': 0.3333333333333333, 'f1': 0.5, 'number': 3} | {'precision': 0.8095238095238095, 'recall': 0.8095238095238095, 'f1': 0.8095238095238095, 'number': 84} | {'precision': 0.625, 'recall': 0.7142857142857143, 'f1': 0.6666666666666666, 'number': 21} | {'precision': 0.7741935483870968, 'recall': 0.5581395348837209, 'f1': 0.6486486486486487, 'number': 301} | {'precision': 0.7058823529411765, 'recall': 0.8, 'f1': 0.7500000000000001, 'number': 15} | {'precision': 0.68, 'recall': 0.9532710280373832, 'f1': 0.7937743190661478, 'number': 214} | {'precision': 0.8974358974358975, 'recall': 0.84, 'f1': 0.8677685950413223, 'number': 125} | {'precision': 0.8000628733102798, 'recall': 0.9736036725325172, 'f1': 0.8783433994823123, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.821917808219178, 'recall': 0.9090909090909091, 'f1': 0.8633093525179857, 'number': 66} | {'precision': 0.831858407079646, 'recall': 0.8173913043478261, 'f1': 0.8245614035087718, 'number': 115} | {'precision': 0.7, 'recall': 0.8974358974358975, 'f1': 0.7865168539325842, 'number': 39} | 0.7887 | 0.9136 | 0.8466 | 0.8023 |
| 0.0634 | 10.0 | 360 | 0.6939 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 0.3333333333333333, 'recall': 1.0, 'f1': 0.5, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.42857142857142855, 'f1': 0.5217391304347826, 'number': 14} | {'precision': 0.6041666666666666, 'recall': 0.7631578947368421, 'f1': 0.6744186046511628, 'number': 38} | {'precision': 0.8208955223880597, 'recall': 0.9322033898305084, 'f1': 0.873015873015873, 'number': 59} | {'precision': 0.6923076923076923, 'recall': 0.9473684210526315, 'f1': 0.7999999999999999, 'number': 19} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.7938144329896907, 'recall': 0.9166666666666666, 'f1': 0.850828729281768, 'number': 84} | {'precision': 0.6086956521739131, 'recall': 0.6666666666666666, 'f1': 0.6363636363636365, 'number': 21} | {'precision': 0.7142857142857143, 'recall': 0.5980066445182725, 'f1': 0.650994575045208, 'number': 301} | {'precision': 0.7222222222222222, 'recall': 0.8666666666666667, 'f1': 0.7878787878787877, 'number': 15} | {'precision': 0.6538461538461539, 'recall': 0.9532710280373832, 'f1': 0.7756653992395437, 'number': 214} | {'precision': 0.7794117647058824, 'recall': 0.848, 'f1': 0.8122605363984675, 'number': 125} | {'precision': 0.8867452661664881, 'recall': 0.9495026778882938, 'f1': 0.9170515425826714, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.7945205479452054, 'recall': 0.8787878787878788, 'f1': 0.8345323741007193, 'number': 66} | {'precision': 0.7723577235772358, 'recall': 0.8260869565217391, 'f1': 0.7983193277310925, 'number': 115} | {'precision': 0.7608695652173914, 'recall': 0.8974358974358975, 'f1': 0.8235294117647058, 'number': 39} | 0.8364 | 0.9046 | 0.8691 | 0.8442 |
| 0.0543 | 11.0 | 396 | 0.7897 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 0.3333333333333333, 'recall': 1.0, 'f1': 0.5, 'number': 1} | {'precision': 0.5714285714285714, 'recall': 0.2857142857142857, 'f1': 0.38095238095238093, 'number': 14} | {'precision': 0.5961538461538461, 'recall': 0.8157894736842105, 'f1': 0.6888888888888889, 'number': 38} | {'precision': 0.875, 'recall': 0.9491525423728814, 'f1': 0.9105691056910569, 'number': 59} | {'precision': 0.68, 'recall': 0.8947368421052632, 'f1': 0.7727272727272727, 'number': 19} | {'precision': 1.0, 'recall': 0.3333333333333333, 'f1': 0.5, 'number': 3} | {'precision': 0.8160919540229885, 'recall': 0.8452380952380952, 'f1': 0.8304093567251463, 'number': 84} | {'precision': 0.75, 'recall': 0.7142857142857143, 'f1': 0.7317073170731706, 'number': 21} | {'precision': 0.7682403433476395, 'recall': 0.5946843853820598, 'f1': 0.6704119850187266, 'number': 301} | {'precision': 0.8571428571428571, 'recall': 0.8, 'f1': 0.8275862068965518, 'number': 15} | {'precision': 0.6804123711340206, 'recall': 0.9252336448598131, 'f1': 0.7841584158415842, 'number': 214} | {'precision': 0.8153846153846154, 'recall': 0.848, 'f1': 0.8313725490196078, 'number': 125} | {'precision': 0.8483730291848373, 'recall': 0.9674827850038256, 'f1': 0.9040214477211796, 'number': 2614} | {'precision': 0.8235294117647058, 'recall': 0.8484848484848485, 'f1': 0.8358208955223881, 'number': 33} | {'precision': 0.8108108108108109, 'recall': 0.9090909090909091, 'f1': 0.8571428571428571, 'number': 66} | {'precision': 0.808695652173913, 'recall': 0.808695652173913, 'f1': 0.808695652173913, 'number': 115} | {'precision': 0.7555555555555555, 'recall': 0.8717948717948718, 'f1': 0.8095238095238095, 'number': 39} | 0.8223 | 0.9136 | 0.8656 | 0.8339 |
| 0.0503 | 12.0 | 432 | 0.8666 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 1} | {'precision': 0.625, 'recall': 0.35714285714285715, 'f1': 0.45454545454545453, 'number': 14} | {'precision': 0.5769230769230769, 'recall': 0.7894736842105263, 'f1': 0.6666666666666666, 'number': 38} | {'precision': 0.7532467532467533, 'recall': 0.9830508474576272, 'f1': 0.8529411764705883, 'number': 59} | {'precision': 0.6296296296296297, 'recall': 0.8947368421052632, 'f1': 0.7391304347826088, 'number': 19} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.8461538461538461, 'recall': 0.7857142857142857, 'f1': 0.8148148148148148, 'number': 84} | {'precision': 0.7272727272727273, 'recall': 0.7619047619047619, 'f1': 0.7441860465116279, 'number': 21} | {'precision': 0.7725321888412017, 'recall': 0.5980066445182725, 'f1': 0.6741573033707865, 'number': 301} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 15} | {'precision': 0.7137681159420289, 'recall': 0.9205607476635514, 'f1': 0.8040816326530612, 'number': 214} | {'precision': 0.6022727272727273, 'recall': 0.848, 'f1': 0.7043189368770764, 'number': 125} | {'precision': 0.8402406417112299, 'recall': 0.9617444529456771, 'f1': 0.8968961826614342, 'number': 2614} | {'precision': 0.8235294117647058, 'recall': 0.8484848484848485, 'f1': 0.8358208955223881, 'number': 33} | {'precision': 0.7972972972972973, 'recall': 0.8939393939393939, 'f1': 0.8428571428571429, 'number': 66} | {'precision': 0.7404580152671756, 'recall': 0.8434782608695652, 'f1': 0.7886178861788617, 'number': 115} | {'precision': 0.7446808510638298, 'recall': 0.8974358974358975, 'f1': 0.813953488372093, 'number': 39} | 0.8059 | 0.9099 | 0.8548 | 0.8207 |
| 0.0396 | 13.0 | 468 | 0.9134 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6122448979591837, 'recall': 0.7894736842105263, 'f1': 0.6896551724137931, 'number': 38} | {'precision': 0.8833333333333333, 'recall': 0.8983050847457628, 'f1': 0.8907563025210085, 'number': 59} | {'precision': 0.6956521739130435, 'recall': 0.8421052631578947, 'f1': 0.761904761904762, 'number': 19} | {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f1': 0.6666666666666666, 'number': 3} | {'precision': 0.8241758241758241, 'recall': 0.8928571428571429, 'f1': 0.8571428571428571, 'number': 84} | {'precision': 0.7142857142857143, 'recall': 0.7142857142857143, 'f1': 0.7142857142857143, 'number': 21} | {'precision': 0.7584745762711864, 'recall': 0.5946843853820598, 'f1': 0.6666666666666666, 'number': 301} | {'precision': 0.75, 'recall': 0.8, 'f1': 0.7741935483870969, 'number': 15} | {'precision': 0.6813559322033899, 'recall': 0.9392523364485982, 'f1': 0.7897838899803538, 'number': 214} | {'precision': 0.7744360902255639, 'recall': 0.824, 'f1': 0.7984496124031008, 'number': 125} | {'precision': 0.8213824289405685, 'recall': 0.9728385615914308, 'f1': 0.8907180385288966, 'number': 2614} | {'precision': 0.8235294117647058, 'recall': 0.8484848484848485, 'f1': 0.8358208955223881, 'number': 33} | {'precision': 0.8142857142857143, 'recall': 0.8636363636363636, 'f1': 0.8382352941176471, 'number': 66} | {'precision': 0.7983193277310925, 'recall': 0.8260869565217391, 'f1': 0.8119658119658121, 'number': 115} | {'precision': 0.6923076923076923, 'recall': 0.9230769230769231, 'f1': 0.7912087912087913, 'number': 39} | 0.8012 | 0.9176 | 0.8555 | 0.8179 |
| 0.0345 | 14.0 | 504 | 0.8474 | {'precision': 0.65, 'recall': 1.0, 'f1': 0.787878787878788, 'number': 13} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 1} | {'precision': 0.8, 'recall': 0.2857142857142857, 'f1': 0.4210526315789473, 'number': 14} | {'precision': 0.5882352941176471, 'recall': 0.7894736842105263, 'f1': 0.6741573033707866, 'number': 38} | {'precision': 0.8301886792452831, 'recall': 0.7457627118644068, 'f1': 0.7857142857142858, 'number': 59} | {'precision': 0.6956521739130435, 'recall': 0.8421052631578947, 'f1': 0.761904761904762, 'number': 19} | {'precision': 1.0, 'recall': 0.3333333333333333, 'f1': 0.5, 'number': 3} | {'precision': 0.8505747126436781, 'recall': 0.8809523809523809, 'f1': 0.8654970760233917, 'number': 84} | {'precision': 0.7391304347826086, 'recall': 0.8095238095238095, 'f1': 0.7727272727272727, 'number': 21} | {'precision': 0.7705627705627706, 'recall': 0.5913621262458472, 'f1': 0.6691729323308271, 'number': 301} | {'precision': 0.7222222222222222, 'recall': 0.8666666666666667, 'f1': 0.7878787878787877, 'number': 15} | {'precision': 0.6655629139072847, 'recall': 0.9392523364485982, 'f1': 0.7790697674418603, 'number': 214} | {'precision': 0.7686567164179104, 'recall': 0.824, 'f1': 0.7953667953667953, 'number': 125} | {'precision': 0.8516762614290552, 'recall': 0.9621270084162203, 'f1': 0.9035387102568708, 'number': 2614} | {'precision': 0.8235294117647058, 'recall': 0.8484848484848485, 'f1': 0.8358208955223881, 'number': 33} | {'precision': 0.7894736842105263, 'recall': 0.9090909090909091, 'f1': 0.8450704225352113, 'number': 66} | {'precision': 0.7164179104477612, 'recall': 0.8347826086956521, 'f1': 0.7710843373493976, 'number': 115} | {'precision': 0.7446808510638298, 'recall': 0.8974358974358975, 'f1': 0.813953488372093, 'number': 39} | 0.8176 | 0.9086 | 0.8607 | 0.8276 |
| 0.0326 | 15.0 | 540 | 0.9141 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 1} | {'precision': 0.8, 'recall': 0.2857142857142857, 'f1': 0.4210526315789473, 'number': 14} | {'precision': 0.5714285714285714, 'recall': 0.8421052631578947, 'f1': 0.6808510638297872, 'number': 38} | {'precision': 0.8793103448275862, 'recall': 0.864406779661017, 'f1': 0.8717948717948718, 'number': 59} | {'precision': 0.7083333333333334, 'recall': 0.8947368421052632, 'f1': 0.7906976744186046, 'number': 19} | {'precision': 0.5, 'recall': 0.3333333333333333, 'f1': 0.4, 'number': 3} | {'precision': 0.8076923076923077, 'recall': 0.75, 'f1': 0.7777777777777779, 'number': 84} | {'precision': 0.7619047619047619, 'recall': 0.7619047619047619, 'f1': 0.7619047619047619, 'number': 21} | {'precision': 0.7733333333333333, 'recall': 0.5780730897009967, 'f1': 0.6615969581749048, 'number': 301} | {'precision': 0.8125, 'recall': 0.8666666666666667, 'f1': 0.8387096774193549, 'number': 15} | {'precision': 0.6982456140350877, 'recall': 0.9299065420560748, 'f1': 0.7975951903807615, 'number': 214} | {'precision': 0.7463768115942029, 'recall': 0.824, 'f1': 0.7832699619771862, 'number': 125} | {'precision': 0.8302073050345509, 'recall': 0.9651874521805662, 'f1': 0.8926233858128427, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.7559055118110236, 'recall': 0.8347826086956521, 'f1': 0.793388429752066, 'number': 115} | {'precision': 0.7446808510638298, 'recall': 0.8974358974358975, 'f1': 0.813953488372093, 'number': 39} | 0.8072 | 0.9086 | 0.8549 | 0.8177 |
| 0.028 | 16.0 | 576 | 0.7912 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6041666666666666, 'recall': 0.7631578947368421, 'f1': 0.6744186046511628, 'number': 38} | {'precision': 0.8983050847457628, 'recall': 0.8983050847457628, 'f1': 0.8983050847457628, 'number': 59} | {'precision': 0.7391304347826086, 'recall': 0.8947368421052632, 'f1': 0.8095238095238095, 'number': 19} | {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f1': 0.6666666666666666, 'number': 3} | {'precision': 0.8131868131868132, 'recall': 0.8809523809523809, 'f1': 0.8457142857142858, 'number': 84} | {'precision': 0.68, 'recall': 0.8095238095238095, 'f1': 0.7391304347826089, 'number': 21} | {'precision': 0.7355371900826446, 'recall': 0.5913621262458472, 'f1': 0.6556169429097605, 'number': 301} | {'precision': 0.65, 'recall': 0.8666666666666667, 'f1': 0.7428571428571429, 'number': 15} | {'precision': 0.6622516556291391, 'recall': 0.9345794392523364, 'f1': 0.7751937984496126, 'number': 214} | {'precision': 0.6190476190476191, 'recall': 0.832, 'f1': 0.7098976109215018, 'number': 125} | {'precision': 0.8609680741503605, 'recall': 0.9594491201224178, 'f1': 0.9075447801700742, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.7894736842105263, 'recall': 0.9090909090909091, 'f1': 0.8450704225352113, 'number': 66} | {'precision': 0.7246376811594203, 'recall': 0.8695652173913043, 'f1': 0.7905138339920948, 'number': 115} | {'precision': 0.7608695652173914, 'recall': 0.8974358974358975, 'f1': 0.8235294117647058, 'number': 39} | 0.8160 | 0.9104 | 0.8606 | 0.8335 |
| 0.0243 | 17.0 | 612 | 0.7519 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 1} | {'precision': 0.8, 'recall': 0.2857142857142857, 'f1': 0.4210526315789473, 'number': 14} | {'precision': 0.6382978723404256, 'recall': 0.7894736842105263, 'f1': 0.7058823529411764, 'number': 38} | {'precision': 0.803030303030303, 'recall': 0.8983050847457628, 'f1': 0.8480000000000001, 'number': 59} | {'precision': 0.7083333333333334, 'recall': 0.8947368421052632, 'f1': 0.7906976744186046, 'number': 19} | {'precision': 1.0, 'recall': 0.3333333333333333, 'f1': 0.5, 'number': 3} | {'precision': 0.797752808988764, 'recall': 0.8452380952380952, 'f1': 0.8208092485549132, 'number': 84} | {'precision': 0.7272727272727273, 'recall': 0.7619047619047619, 'f1': 0.7441860465116279, 'number': 21} | {'precision': 0.7802690582959642, 'recall': 0.5780730897009967, 'f1': 0.6641221374045801, 'number': 301} | {'precision': 0.9166666666666666, 'recall': 0.7333333333333333, 'f1': 0.8148148148148148, 'number': 15} | {'precision': 0.6883561643835616, 'recall': 0.9392523364485982, 'f1': 0.7944664031620554, 'number': 214} | {'precision': 0.7862595419847328, 'recall': 0.824, 'f1': 0.8046875, 'number': 125} | {'precision': 0.8832505322924059, 'recall': 0.9521805661820965, 'f1': 0.916421207658321, 'number': 2614} | {'precision': 0.9032258064516129, 'recall': 0.8484848484848485, 'f1': 0.875, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.8952380952380953, 'recall': 0.8173913043478261, 'f1': 0.8545454545454546, 'number': 115} | {'precision': 0.825, 'recall': 0.8461538461538461, 'f1': 0.8354430379746836, 'number': 39} | 0.8500 | 0.9006 | 0.8746 | 0.8515 |
| 0.0196 | 18.0 | 648 | 0.9592 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6382978723404256, 'recall': 0.7894736842105263, 'f1': 0.7058823529411764, 'number': 38} | {'precision': 0.8928571428571429, 'recall': 0.847457627118644, 'f1': 0.8695652173913044, 'number': 59} | {'precision': 0.6818181818181818, 'recall': 0.7894736842105263, 'f1': 0.7317073170731707, 'number': 19} | {'precision': 0.5, 'recall': 0.6666666666666666, 'f1': 0.5714285714285715, 'number': 3} | {'precision': 0.7916666666666666, 'recall': 0.9047619047619048, 'f1': 0.8444444444444444, 'number': 84} | {'precision': 0.6666666666666666, 'recall': 0.7619047619047619, 'f1': 0.7111111111111111, 'number': 21} | {'precision': 0.7639484978540773, 'recall': 0.5913621262458472, 'f1': 0.6666666666666666, 'number': 301} | {'precision': 0.8571428571428571, 'recall': 0.8, 'f1': 0.8275862068965518, 'number': 15} | {'precision': 0.7073170731707317, 'recall': 0.9485981308411215, 'f1': 0.810379241516966, 'number': 214} | {'precision': 0.7114093959731543, 'recall': 0.848, 'f1': 0.7737226277372262, 'number': 125} | {'precision': 0.8324555628703094, 'recall': 0.9674827850038256, 'f1': 0.8949044585987262, 'number': 2614} | {'precision': 0.8235294117647058, 'recall': 0.8484848484848485, 'f1': 0.8358208955223881, 'number': 33} | {'precision': 0.7972972972972973, 'recall': 0.8939393939393939, 'f1': 0.8428571428571429, 'number': 66} | {'precision': 0.6622516556291391, 'recall': 0.8695652173913043, 'f1': 0.7518796992481204, 'number': 115} | {'precision': 0.7446808510638298, 'recall': 0.8974358974358975, 'f1': 0.813953488372093, 'number': 39} | 0.8038 | 0.9160 | 0.8562 | 0.8191 |
| 0.0203 | 19.0 | 684 | 0.8987 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.5849056603773585, 'recall': 0.8157894736842105, 'f1': 0.6813186813186812, 'number': 38} | {'precision': 0.847457627118644, 'recall': 0.847457627118644, 'f1': 0.847457627118644, 'number': 59} | {'precision': 0.7, 'recall': 0.7368421052631579, 'f1': 0.717948717948718, 'number': 19} | {'precision': 0.5, 'recall': 0.6666666666666666, 'f1': 0.5714285714285715, 'number': 3} | {'precision': 0.8111111111111111, 'recall': 0.8690476190476191, 'f1': 0.8390804597701149, 'number': 84} | {'precision': 0.75, 'recall': 0.7142857142857143, 'f1': 0.7317073170731706, 'number': 21} | {'precision': 0.7531914893617021, 'recall': 0.5880398671096345, 'f1': 0.6604477611940299, 'number': 301} | {'precision': 0.8571428571428571, 'recall': 0.8, 'f1': 0.8275862068965518, 'number': 15} | {'precision': 0.6928327645051194, 'recall': 0.9485981308411215, 'f1': 0.8007889546351086, 'number': 214} | {'precision': 0.7412587412587412, 'recall': 0.848, 'f1': 0.791044776119403, 'number': 125} | {'precision': 0.8358159549817941, 'recall': 0.9659525631216527, 'f1': 0.896184560780834, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.7763157894736842, 'recall': 0.8939393939393939, 'f1': 0.8309859154929577, 'number': 66} | {'precision': 0.8099173553719008, 'recall': 0.8521739130434782, 'f1': 0.8305084745762712, 'number': 115} | {'precision': 0.7727272727272727, 'recall': 0.8717948717948718, 'f1': 0.8192771084337349, 'number': 39} | 0.8108 | 0.9128 | 0.8588 | 0.8225 |
| 0.018 | 20.0 | 720 | 0.8923 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6595744680851063, 'recall': 0.8157894736842105, 'f1': 0.7294117647058823, 'number': 38} | {'precision': 0.8727272727272727, 'recall': 0.8135593220338984, 'f1': 0.8421052631578948, 'number': 59} | {'precision': 0.6521739130434783, 'recall': 0.7894736842105263, 'f1': 0.7142857142857143, 'number': 19} | {'precision': 0.4, 'recall': 0.6666666666666666, 'f1': 0.5, 'number': 3} | {'precision': 0.8191489361702128, 'recall': 0.9166666666666666, 'f1': 0.8651685393258427, 'number': 84} | {'precision': 0.6956521739130435, 'recall': 0.7619047619047619, 'f1': 0.7272727272727272, 'number': 21} | {'precision': 0.7426160337552743, 'recall': 0.584717607973422, 'f1': 0.654275092936803, 'number': 301} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 15} | {'precision': 0.7153024911032029, 'recall': 0.9392523364485982, 'f1': 0.8121212121212122, 'number': 214} | {'precision': 0.7094594594594594, 'recall': 0.84, 'f1': 0.769230769230769, 'number': 125} | {'precision': 0.8404928404928405, 'recall': 0.9655700076511095, 'f1': 0.8987003738650524, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.7866666666666666, 'recall': 0.8939393939393939, 'f1': 0.8368794326241135, 'number': 66} | {'precision': 0.7716535433070866, 'recall': 0.8521739130434782, 'f1': 0.8099173553719009, 'number': 115} | {'precision': 0.7142857142857143, 'recall': 0.8974358974358975, 'f1': 0.7954545454545455, 'number': 39} | 0.8131 | 0.9128 | 0.8601 | 0.8229 |
| 0.0144 | 21.0 | 756 | 0.8075 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.7142857142857143, 'recall': 0.35714285714285715, 'f1': 0.4761904761904762, 'number': 14} | {'precision': 0.6744186046511628, 'recall': 0.7631578947368421, 'f1': 0.7160493827160495, 'number': 38} | {'precision': 0.8235294117647058, 'recall': 0.711864406779661, 'f1': 0.7636363636363636, 'number': 59} | {'precision': 0.7, 'recall': 0.7368421052631579, 'f1': 0.717948717948718, 'number': 19} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.8148148148148148, 'recall': 0.7857142857142857, 'f1': 0.7999999999999999, 'number': 84} | {'precision': 0.5862068965517241, 'recall': 0.8095238095238095, 'f1': 0.68, 'number': 21} | {'precision': 0.6593406593406593, 'recall': 0.5980066445182725, 'f1': 0.6271777003484321, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.6766666666666666, 'recall': 0.9485981308411215, 'f1': 0.7898832684824902, 'number': 214} | {'precision': 0.7191780821917808, 'recall': 0.84, 'f1': 0.7749077490774908, 'number': 125} | {'precision': 0.8791402396053559, 'recall': 0.9544758990053558, 'f1': 0.9152604548789435, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.759493670886076, 'recall': 0.9090909090909091, 'f1': 0.8275862068965516, 'number': 66} | {'precision': 0.8, 'recall': 0.8695652173913043, 'f1': 0.8333333333333333, 'number': 115} | {'precision': 0.7391304347826086, 'recall': 0.8717948717948718, 'f1': 0.7999999999999999, 'number': 39} | 0.8296 | 0.9028 | 0.8646 | 0.8371 |
| 0.0133 | 22.0 | 792 | 1.0028 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6122448979591837, 'recall': 0.7894736842105263, 'f1': 0.6896551724137931, 'number': 38} | {'precision': 0.85, 'recall': 0.864406779661017, 'f1': 0.8571428571428572, 'number': 59} | {'precision': 0.68, 'recall': 0.8947368421052632, 'f1': 0.7727272727272727, 'number': 19} | {'precision': 0.5, 'recall': 0.3333333333333333, 'f1': 0.4, 'number': 3} | {'precision': 0.8, 'recall': 0.9047619047619048, 'f1': 0.8491620111731844, 'number': 84} | {'precision': 0.68, 'recall': 0.8095238095238095, 'f1': 0.7391304347826089, 'number': 21} | {'precision': 0.7521008403361344, 'recall': 0.5946843853820598, 'f1': 0.6641929499072357, 'number': 301} | {'precision': 0.8125, 'recall': 0.8666666666666667, 'f1': 0.8387096774193549, 'number': 15} | {'precision': 0.6833333333333333, 'recall': 0.9579439252336449, 'f1': 0.7976653696498055, 'number': 214} | {'precision': 0.7737226277372263, 'recall': 0.848, 'f1': 0.8091603053435115, 'number': 125} | {'precision': 0.8294854881266491, 'recall': 0.9621270084162203, 'f1': 0.8908962097059866, 'number': 2614} | {'precision': 0.9032258064516129, 'recall': 0.8484848484848485, 'f1': 0.875, 'number': 33} | {'precision': 0.7887323943661971, 'recall': 0.8484848484848485, 'f1': 0.8175182481751825, 'number': 66} | {'precision': 0.7445255474452555, 'recall': 0.8869565217391304, 'f1': 0.8095238095238095, 'number': 115} | {'precision': 0.813953488372093, 'recall': 0.8974358974358975, 'f1': 0.8536585365853658, 'number': 39} | 0.8049 | 0.9139 | 0.8559 | 0.8158 |
| 0.0112 | 23.0 | 828 | 0.8728 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.62, 'recall': 0.8157894736842105, 'f1': 0.7045454545454546, 'number': 38} | {'precision': 0.7936507936507936, 'recall': 0.847457627118644, 'f1': 0.819672131147541, 'number': 59} | {'precision': 0.7083333333333334, 'recall': 0.8947368421052632, 'f1': 0.7906976744186046, 'number': 19} | {'precision': 1.0, 'recall': 0.3333333333333333, 'f1': 0.5, 'number': 3} | {'precision': 0.8160919540229885, 'recall': 0.8452380952380952, 'f1': 0.8304093567251463, 'number': 84} | {'precision': 0.6956521739130435, 'recall': 0.7619047619047619, 'f1': 0.7272727272727272, 'number': 21} | {'precision': 0.7468354430379747, 'recall': 0.5880398671096345, 'f1': 0.657992565055762, 'number': 301} | {'precision': 0.7857142857142857, 'recall': 0.7333333333333333, 'f1': 0.7586206896551724, 'number': 15} | {'precision': 0.6955017301038062, 'recall': 0.9392523364485982, 'f1': 0.7992047713717694, 'number': 214} | {'precision': 0.7142857142857143, 'recall': 0.84, 'f1': 0.7720588235294118, 'number': 125} | {'precision': 0.8649395509499136, 'recall': 0.9579188982402448, 'f1': 0.9090579052459611, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.7887323943661971, 'recall': 0.8484848484848485, 'f1': 0.8175182481751825, 'number': 66} | {'precision': 0.7388059701492538, 'recall': 0.8608695652173913, 'f1': 0.7951807228915663, 'number': 115} | {'precision': 0.7608695652173914, 'recall': 0.8974358974358975, 'f1': 0.8235294117647058, 'number': 39} | 0.8265 | 0.9062 | 0.8645 | 0.8351 |
| 0.0113 | 24.0 | 864 | 1.0214 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8, 'recall': 0.2857142857142857, 'f1': 0.4210526315789473, 'number': 14} | {'precision': 0.6078431372549019, 'recall': 0.8157894736842105, 'f1': 0.6966292134831461, 'number': 38} | {'precision': 0.7868852459016393, 'recall': 0.8135593220338984, 'f1': 0.8, 'number': 59} | {'precision': 0.6956521739130435, 'recall': 0.8421052631578947, 'f1': 0.761904761904762, 'number': 19} | {'precision': 0.5, 'recall': 0.3333333333333333, 'f1': 0.4, 'number': 3} | {'precision': 0.813953488372093, 'recall': 0.8333333333333334, 'f1': 0.8235294117647058, 'number': 84} | {'precision': 0.6956521739130435, 'recall': 0.7619047619047619, 'f1': 0.7272727272727272, 'number': 21} | {'precision': 0.7510548523206751, 'recall': 0.5913621262458472, 'f1': 0.6617100371747212, 'number': 301} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 15} | {'precision': 0.6952054794520548, 'recall': 0.9485981308411215, 'f1': 0.8023715415019763, 'number': 214} | {'precision': 0.7428571428571429, 'recall': 0.832, 'f1': 0.7849056603773585, 'number': 125} | {'precision': 0.8280632411067194, 'recall': 0.9617444529456771, 'f1': 0.8899115044247786, 'number': 2614} | {'precision': 0.8235294117647058, 'recall': 0.8484848484848485, 'f1': 0.8358208955223881, 'number': 33} | {'precision': 0.7887323943661971, 'recall': 0.8484848484848485, 'f1': 0.8175182481751825, 'number': 66} | {'precision': 0.7557251908396947, 'recall': 0.8608695652173913, 'f1': 0.8048780487804879, 'number': 115} | {'precision': 0.7555555555555555, 'recall': 0.8717948717948718, 'f1': 0.8095238095238095, 'number': 39} | 0.8030 | 0.9083 | 0.8524 | 0.8124 |
| 0.0103 | 25.0 | 900 | 1.1736 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8, 'recall': 0.2857142857142857, 'f1': 0.4210526315789473, 'number': 14} | {'precision': 0.625, 'recall': 0.7894736842105263, 'f1': 0.6976744186046512, 'number': 38} | {'precision': 0.8035714285714286, 'recall': 0.7627118644067796, 'f1': 0.782608695652174, 'number': 59} | {'precision': 0.6956521739130435, 'recall': 0.8421052631578947, 'f1': 0.761904761904762, 'number': 19} | {'precision': 1.0, 'recall': 0.3333333333333333, 'f1': 0.5, 'number': 3} | {'precision': 0.8131868131868132, 'recall': 0.8809523809523809, 'f1': 0.8457142857142858, 'number': 84} | {'precision': 0.6666666666666666, 'recall': 0.7619047619047619, 'f1': 0.7111111111111111, 'number': 21} | {'precision': 0.7489539748953975, 'recall': 0.5946843853820598, 'f1': 0.662962962962963, 'number': 301} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 15} | {'precision': 0.7073170731707317, 'recall': 0.9485981308411215, 'f1': 0.810379241516966, 'number': 214} | {'precision': 0.6751592356687898, 'recall': 0.848, 'f1': 0.75177304964539, 'number': 125} | {'precision': 0.8172424927349048, 'recall': 0.968247895944912, 'f1': 0.8863596568026616, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.7671232876712328, 'recall': 0.8484848484848485, 'f1': 0.8057553956834531, 'number': 66} | {'precision': 0.75, 'recall': 0.8608695652173913, 'f1': 0.8016194331983807, 'number': 115} | {'precision': 0.7, 'recall': 0.8974358974358975, 'f1': 0.7865168539325842, 'number': 39} | 0.7931 | 0.9139 | 0.8492 | 0.8061 |
| 0.0101 | 26.0 | 936 | 0.9218 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8, 'recall': 0.2857142857142857, 'f1': 0.4210526315789473, 'number': 14} | {'precision': 0.6521739130434783, 'recall': 0.7894736842105263, 'f1': 0.7142857142857143, 'number': 38} | {'precision': 0.78125, 'recall': 0.847457627118644, 'f1': 0.8130081300813008, 'number': 59} | {'precision': 0.6818181818181818, 'recall': 0.7894736842105263, 'f1': 0.7317073170731707, 'number': 19} | {'precision': 0.4, 'recall': 0.6666666666666666, 'f1': 0.5, 'number': 3} | {'precision': 0.8043478260869565, 'recall': 0.8809523809523809, 'f1': 0.8409090909090908, 'number': 84} | {'precision': 0.64, 'recall': 0.7619047619047619, 'f1': 0.6956521739130435, 'number': 21} | {'precision': 0.7447698744769874, 'recall': 0.5913621262458472, 'f1': 0.6592592592592592, 'number': 301} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 15} | {'precision': 0.6962457337883959, 'recall': 0.9532710280373832, 'f1': 0.804733727810651, 'number': 214} | {'precision': 0.7720588235294118, 'recall': 0.84, 'f1': 0.8045977011494253, 'number': 125} | {'precision': 0.8474004051316678, 'recall': 0.9602142310635042, 'f1': 0.900286944045911, 'number': 2614} | {'precision': 0.9032258064516129, 'recall': 0.8484848484848485, 'f1': 0.875, 'number': 33} | {'precision': 0.7972972972972973, 'recall': 0.8939393939393939, 'f1': 0.8428571428571429, 'number': 66} | {'precision': 0.8135593220338984, 'recall': 0.8347826086956521, 'f1': 0.8240343347639484, 'number': 115} | {'precision': 0.7291666666666666, 'recall': 0.8974358974358975, 'f1': 0.8045977011494253, 'number': 39} | 0.8185 | 0.9094 | 0.8616 | 0.8280 |
| 0.0083 | 27.0 | 972 | 1.0161 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8, 'recall': 0.2857142857142857, 'f1': 0.4210526315789473, 'number': 14} | {'precision': 0.625, 'recall': 0.7894736842105263, 'f1': 0.6976744186046512, 'number': 38} | {'precision': 0.8245614035087719, 'recall': 0.7966101694915254, 'f1': 0.8103448275862069, 'number': 59} | {'precision': 0.7142857142857143, 'recall': 0.7894736842105263, 'f1': 0.7500000000000001, 'number': 19} | {'precision': 0.5, 'recall': 0.6666666666666666, 'f1': 0.5714285714285715, 'number': 3} | {'precision': 0.8131868131868132, 'recall': 0.8809523809523809, 'f1': 0.8457142857142858, 'number': 84} | {'precision': 0.6956521739130435, 'recall': 0.7619047619047619, 'f1': 0.7272727272727272, 'number': 21} | {'precision': 0.7478991596638656, 'recall': 0.5913621262458472, 'f1': 0.660482374768089, 'number': 301} | {'precision': 0.8125, 'recall': 0.8666666666666667, 'f1': 0.8387096774193549, 'number': 15} | {'precision': 0.6798679867986799, 'recall': 0.9626168224299065, 'f1': 0.7969052224371374, 'number': 214} | {'precision': 0.726027397260274, 'recall': 0.848, 'f1': 0.7822878228782287, 'number': 125} | {'precision': 0.8484438430311232, 'recall': 0.9594491201224178, 'f1': 0.9005385996409335, 'number': 2614} | {'precision': 0.8235294117647058, 'recall': 0.8484848484848485, 'f1': 0.8358208955223881, 'number': 33} | {'precision': 0.7887323943661971, 'recall': 0.8484848484848485, 'f1': 0.8175182481751825, 'number': 66} | {'precision': 0.7575757575757576, 'recall': 0.8695652173913043, 'f1': 0.8097165991902834, 'number': 115} | {'precision': 0.7608695652173914, 'recall': 0.8974358974358975, 'f1': 0.8235294117647058, 'number': 39} | 0.8154 | 0.9094 | 0.8598 | 0.8246 |
| 0.0084 | 28.0 | 1008 | 1.0284 | {'precision': 0.7222222222222222, 'recall': 1.0, 'f1': 0.8387096774193548, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.625, 'recall': 0.35714285714285715, 'f1': 0.45454545454545453, 'number': 14} | {'precision': 0.5882352941176471, 'recall': 0.7894736842105263, 'f1': 0.6741573033707866, 'number': 38} | {'precision': 0.8245614035087719, 'recall': 0.7966101694915254, 'f1': 0.8103448275862069, 'number': 59} | {'precision': 0.7083333333333334, 'recall': 0.8947368421052632, 'f1': 0.7906976744186046, 'number': 19} | {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f1': 0.6666666666666666, 'number': 3} | {'precision': 0.8354430379746836, 'recall': 0.7857142857142857, 'f1': 0.8098159509202455, 'number': 84} | {'precision': 0.68, 'recall': 0.8095238095238095, 'f1': 0.7391304347826089, 'number': 21} | {'precision': 0.7276422764227642, 'recall': 0.5946843853820598, 'f1': 0.6544789762340036, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.6835016835016835, 'recall': 0.9485981308411215, 'f1': 0.7945205479452055, 'number': 214} | {'precision': 0.7851851851851852, 'recall': 0.848, 'f1': 0.8153846153846154, 'number': 125} | {'precision': 0.84029401937855, 'recall': 0.9621270084162203, 'f1': 0.8970929195648296, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.8211382113821138, 'recall': 0.8782608695652174, 'f1': 0.8487394957983194, 'number': 115} | {'precision': 0.7608695652173914, 'recall': 0.8974358974358975, 'f1': 0.8235294117647058, 'number': 39} | 0.8128 | 0.9110 | 0.8591 | 0.8217 |
| 0.0068 | 29.0 | 1044 | 1.0004 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 1} | {'precision': 0.5714285714285714, 'recall': 0.2857142857142857, 'f1': 0.38095238095238093, 'number': 14} | {'precision': 0.625, 'recall': 0.7894736842105263, 'f1': 0.6976744186046512, 'number': 38} | {'precision': 0.8135593220338984, 'recall': 0.8135593220338984, 'f1': 0.8135593220338985, 'number': 59} | {'precision': 0.6666666666666666, 'recall': 0.8421052631578947, 'f1': 0.744186046511628, 'number': 19} | {'precision': 1.0, 'recall': 0.3333333333333333, 'f1': 0.5, 'number': 3} | {'precision': 0.8372093023255814, 'recall': 0.8571428571428571, 'f1': 0.8470588235294119, 'number': 84} | {'precision': 0.68, 'recall': 0.8095238095238095, 'f1': 0.7391304347826089, 'number': 21} | {'precision': 0.7489539748953975, 'recall': 0.5946843853820598, 'f1': 0.662962962962963, 'number': 301} | {'precision': 0.7222222222222222, 'recall': 0.8666666666666667, 'f1': 0.7878787878787877, 'number': 15} | {'precision': 0.689419795221843, 'recall': 0.9439252336448598, 'f1': 0.7968441814595661, 'number': 214} | {'precision': 0.75177304964539, 'recall': 0.848, 'f1': 0.7969924812030075, 'number': 125} | {'precision': 0.8466734211415062, 'recall': 0.9590665646518746, 'f1': 0.8993721973094171, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.7887323943661971, 'recall': 0.8484848484848485, 'f1': 0.8175182481751825, 'number': 66} | {'precision': 0.8048780487804879, 'recall': 0.8608695652173913, 'f1': 0.8319327731092437, 'number': 115} | {'precision': 0.717391304347826, 'recall': 0.8461538461538461, 'f1': 0.776470588235294, 'number': 39} | 0.8168 | 0.9075 | 0.8598 | 0.8250 |
| 0.0062 | 30.0 | 1080 | 0.9365 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6666666666666666, 'recall': 0.8421052631578947, 'f1': 0.744186046511628, 'number': 38} | {'precision': 0.8032786885245902, 'recall': 0.8305084745762712, 'f1': 0.8166666666666667, 'number': 59} | {'precision': 0.65, 'recall': 0.6842105263157895, 'f1': 0.6666666666666667, 'number': 19} | {'precision': 0.25, 'recall': 0.3333333333333333, 'f1': 0.28571428571428575, 'number': 3} | {'precision': 0.8295454545454546, 'recall': 0.8690476190476191, 'f1': 0.8488372093023256, 'number': 84} | {'precision': 0.7391304347826086, 'recall': 0.8095238095238095, 'f1': 0.7727272727272727, 'number': 21} | {'precision': 0.7416666666666667, 'recall': 0.5913621262458472, 'f1': 0.6580406654343807, 'number': 301} | {'precision': 0.8666666666666667, 'recall': 0.8666666666666667, 'f1': 0.8666666666666667, 'number': 15} | {'precision': 0.7038327526132404, 'recall': 0.9439252336448598, 'f1': 0.8063872255489022, 'number': 214} | {'precision': 0.762589928057554, 'recall': 0.848, 'f1': 0.8030303030303031, 'number': 125} | {'precision': 0.8570451436388509, 'recall': 0.9586840091813313, 'f1': 0.9050198627663417, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.7972972972972973, 'recall': 0.8939393939393939, 'f1': 0.8428571428571429, 'number': 66} | {'precision': 0.8305084745762712, 'recall': 0.8521739130434782, 'f1': 0.8412017167381974, 'number': 115} | {'precision': 0.7727272727272727, 'recall': 0.8717948717948718, 'f1': 0.8192771084337349, 'number': 39} | 0.8276 | 0.9081 | 0.8660 | 0.8345 |
| 0.0067 | 31.0 | 1116 | 1.0133 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.64, 'recall': 0.8421052631578947, 'f1': 0.7272727272727272, 'number': 38} | {'precision': 0.8245614035087719, 'recall': 0.7966101694915254, 'f1': 0.8103448275862069, 'number': 59} | {'precision': 0.6666666666666666, 'recall': 0.7368421052631579, 'f1': 0.7, 'number': 19} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.8131868131868132, 'recall': 0.8809523809523809, 'f1': 0.8457142857142858, 'number': 84} | {'precision': 0.6521739130434783, 'recall': 0.7142857142857143, 'f1': 0.6818181818181819, 'number': 21} | {'precision': 0.7531914893617021, 'recall': 0.5880398671096345, 'f1': 0.6604477611940299, 'number': 301} | {'precision': 0.8125, 'recall': 0.8666666666666667, 'f1': 0.8387096774193549, 'number': 15} | {'precision': 0.7013888888888888, 'recall': 0.9439252336448598, 'f1': 0.8047808764940239, 'number': 214} | {'precision': 0.7114093959731543, 'recall': 0.848, 'f1': 0.7737226277372262, 'number': 125} | {'precision': 0.8465966813410092, 'recall': 0.9563886763580719, 'f1': 0.8981498113885396, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.7808219178082192, 'recall': 0.8636363636363636, 'f1': 0.8201438848920863, 'number': 66} | {'precision': 0.7575757575757576, 'recall': 0.8695652173913043, 'f1': 0.8097165991902834, 'number': 115} | {'precision': 0.7291666666666666, 'recall': 0.8974358974358975, 'f1': 0.8045977011494253, 'number': 39} | 0.8150 | 0.9059 | 0.8581 | 0.8237 |
| 0.0057 | 32.0 | 1152 | 1.0674 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.7272727272727273, 'recall': 0.8421052631578947, 'f1': 0.7804878048780488, 'number': 38} | {'precision': 0.8181818181818182, 'recall': 0.7627118644067796, 'f1': 0.7894736842105264, 'number': 59} | {'precision': 0.6363636363636364, 'recall': 0.7368421052631579, 'f1': 0.6829268292682926, 'number': 19} | {'precision': 0.25, 'recall': 0.3333333333333333, 'f1': 0.28571428571428575, 'number': 3} | {'precision': 0.8352941176470589, 'recall': 0.8452380952380952, 'f1': 0.8402366863905326, 'number': 84} | {'precision': 0.6956521739130435, 'recall': 0.7619047619047619, 'f1': 0.7272727272727272, 'number': 21} | {'precision': 0.7574468085106383, 'recall': 0.5913621262458472, 'f1': 0.6641791044776119, 'number': 301} | {'precision': 0.8125, 'recall': 0.8666666666666667, 'f1': 0.8387096774193549, 'number': 15} | {'precision': 0.7024221453287197, 'recall': 0.9485981308411215, 'f1': 0.8071570576540756, 'number': 214} | {'precision': 0.813953488372093, 'recall': 0.84, 'f1': 0.8267716535433071, 'number': 125} | {'precision': 0.8357214261912695, 'recall': 0.9594491201224178, 'f1': 0.8933214603739983, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.8828828828828829, 'recall': 0.8521739130434782, 'f1': 0.8672566371681416, 'number': 115} | {'precision': 0.7555555555555555, 'recall': 0.8717948717948718, 'f1': 0.8095238095238095, 'number': 39} | 0.8172 | 0.9073 | 0.8599 | 0.8231 |
| 0.0054 | 33.0 | 1188 | 1.0835 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.7021276595744681, 'recall': 0.868421052631579, 'f1': 0.7764705882352942, 'number': 38} | {'precision': 0.8333333333333334, 'recall': 0.7627118644067796, 'f1': 0.7964601769911505, 'number': 59} | {'precision': 0.6818181818181818, 'recall': 0.7894736842105263, 'f1': 0.7317073170731707, 'number': 19} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.8222222222222222, 'recall': 0.8809523809523809, 'f1': 0.8505747126436781, 'number': 84} | {'precision': 0.7391304347826086, 'recall': 0.8095238095238095, 'f1': 0.7727272727272727, 'number': 21} | {'precision': 0.7672413793103449, 'recall': 0.5913621262458472, 'f1': 0.6679174484052532, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.6952054794520548, 'recall': 0.9485981308411215, 'f1': 0.8023715415019763, 'number': 214} | {'precision': 0.7412587412587412, 'recall': 0.848, 'f1': 0.791044776119403, 'number': 125} | {'precision': 0.8363333333333334, 'recall': 0.959831675592961, 'f1': 0.893836836480228, 'number': 2614} | {'precision': 0.8, 'recall': 0.8484848484848485, 'f1': 0.823529411764706, 'number': 33} | {'precision': 0.7916666666666666, 'recall': 0.8636363636363636, 'f1': 0.8260869565217391, 'number': 66} | {'precision': 0.7795275590551181, 'recall': 0.8608695652173913, 'f1': 0.8181818181818182, 'number': 115} | {'precision': 0.7954545454545454, 'recall': 0.8974358974358975, 'f1': 0.8433734939759037, 'number': 39} | 0.8121 | 0.9091 | 0.8579 | 0.8203 |
| 0.0054 | 34.0 | 1224 | 1.1475 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6666666666666666, 'recall': 0.8421052631578947, 'f1': 0.744186046511628, 'number': 38} | {'precision': 0.7796610169491526, 'recall': 0.7796610169491526, 'f1': 0.7796610169491526, 'number': 59} | {'precision': 0.6666666666666666, 'recall': 0.8421052631578947, 'f1': 0.744186046511628, 'number': 19} | {'precision': 0.5, 'recall': 0.3333333333333333, 'f1': 0.4, 'number': 3} | {'precision': 0.8181818181818182, 'recall': 0.8571428571428571, 'f1': 0.8372093023255814, 'number': 84} | {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21} | {'precision': 0.7489539748953975, 'recall': 0.5946843853820598, 'f1': 0.662962962962963, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.6986301369863014, 'recall': 0.9532710280373832, 'f1': 0.8063241106719368, 'number': 214} | {'precision': 0.7310344827586207, 'recall': 0.848, 'f1': 0.7851851851851852, 'number': 125} | {'precision': 0.8288258575197889, 'recall': 0.9613618974751339, 'f1': 0.8901877435352462, 'number': 2614} | {'precision': 0.8, 'recall': 0.8484848484848485, 'f1': 0.823529411764706, 'number': 33} | {'precision': 0.7916666666666666, 'recall': 0.8636363636363636, 'f1': 0.8260869565217391, 'number': 66} | {'precision': 0.7829457364341085, 'recall': 0.8782608695652174, 'f1': 0.8278688524590164, 'number': 115} | {'precision': 0.75, 'recall': 0.8461538461538461, 'f1': 0.7951807228915662, 'number': 39} | 0.8041 | 0.9104 | 0.8540 | 0.8140 |
| 0.0053 | 35.0 | 1260 | 1.1259 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6666666666666666, 'recall': 0.8421052631578947, 'f1': 0.744186046511628, 'number': 38} | {'precision': 0.8070175438596491, 'recall': 0.7796610169491526, 'f1': 0.7931034482758621, 'number': 59} | {'precision': 0.7142857142857143, 'recall': 0.7894736842105263, 'f1': 0.7500000000000001, 'number': 19} | {'precision': 0.5, 'recall': 0.6666666666666666, 'f1': 0.5714285714285715, 'number': 3} | {'precision': 0.8414634146341463, 'recall': 0.8214285714285714, 'f1': 0.8313253012048192, 'number': 84} | {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21} | {'precision': 0.7542372881355932, 'recall': 0.5913621262458472, 'f1': 0.6629422718808194, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.7132867132867133, 'recall': 0.9532710280373832, 'f1': 0.8160000000000001, 'number': 214} | {'precision': 0.6973684210526315, 'recall': 0.848, 'f1': 0.7653429602888087, 'number': 125} | {'precision': 0.8338318378198737, 'recall': 0.959831675592961, 'f1': 0.8924061888671527, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.7916666666666666, 'recall': 0.8636363636363636, 'f1': 0.8260869565217391, 'number': 66} | {'precision': 0.7481481481481481, 'recall': 0.8782608695652174, 'f1': 0.8079999999999999, 'number': 115} | {'precision': 0.7777777777777778, 'recall': 0.8974358974358975, 'f1': 0.8333333333333333, 'number': 39} | 0.8082 | 0.9089 | 0.8556 | 0.8177 |
| 0.0045 | 36.0 | 1296 | 1.0876 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6666666666666666, 'recall': 0.8421052631578947, 'f1': 0.744186046511628, 'number': 38} | {'precision': 0.7818181818181819, 'recall': 0.7288135593220338, 'f1': 0.7543859649122807, 'number': 59} | {'precision': 0.7142857142857143, 'recall': 0.7894736842105263, 'f1': 0.7500000000000001, 'number': 19} | {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f1': 0.6666666666666666, 'number': 3} | {'precision': 0.8493150684931506, 'recall': 0.7380952380952381, 'f1': 0.7898089171974523, 'number': 84} | {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21} | {'precision': 0.7385892116182573, 'recall': 0.5913621262458472, 'f1': 0.6568265682656825, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.688135593220339, 'recall': 0.9485981308411215, 'f1': 0.7976424361493124, 'number': 214} | {'precision': 0.7536231884057971, 'recall': 0.832, 'f1': 0.7908745247148289, 'number': 125} | {'precision': 0.8393695506371562, 'recall': 0.9575363427697016, 'f1': 0.8945675482487491, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.75, 'recall': 0.9090909090909091, 'f1': 0.821917808219178, 'number': 66} | {'precision': 0.819672131147541, 'recall': 0.8695652173913043, 'f1': 0.8438818565400844, 'number': 115} | {'precision': 0.72, 'recall': 0.9230769230769231, 'f1': 0.8089887640449438, 'number': 39} | 0.8119 | 0.9046 | 0.8557 | 0.8175 |
| 0.0052 | 37.0 | 1332 | 1.0954 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.625, 'recall': 0.7894736842105263, 'f1': 0.6976744186046512, 'number': 38} | {'precision': 0.7540983606557377, 'recall': 0.7796610169491526, 'f1': 0.7666666666666666, 'number': 59} | {'precision': 0.6956521739130435, 'recall': 0.8421052631578947, 'f1': 0.761904761904762, 'number': 19} | {'precision': 0.5, 'recall': 0.3333333333333333, 'f1': 0.4, 'number': 3} | {'precision': 0.8352941176470589, 'recall': 0.8452380952380952, 'f1': 0.8402366863905326, 'number': 84} | {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21} | {'precision': 0.7542372881355932, 'recall': 0.5913621262458472, 'f1': 0.6629422718808194, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.6952054794520548, 'recall': 0.9485981308411215, 'f1': 0.8023715415019763, 'number': 214} | {'precision': 0.7, 'recall': 0.84, 'f1': 0.7636363636363636, 'number': 125} | {'precision': 0.8398123324396782, 'recall': 0.9586840091813313, 'f1': 0.8953197570560915, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.7887323943661971, 'recall': 0.8484848484848485, 'f1': 0.8175182481751825, 'number': 66} | {'precision': 0.7338129496402878, 'recall': 0.8869565217391304, 'f1': 0.8031496062992125, 'number': 115} | {'precision': 0.7608695652173914, 'recall': 0.8974358974358975, 'f1': 0.8235294117647058, 'number': 39} | 0.8087 | 0.9075 | 0.8553 | 0.8185 |
| 0.0045 | 38.0 | 1368 | 1.0802 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6458333333333334, 'recall': 0.8157894736842105, 'f1': 0.7209302325581395, 'number': 38} | {'precision': 0.7796610169491526, 'recall': 0.7796610169491526, 'f1': 0.7796610169491526, 'number': 59} | {'precision': 0.6956521739130435, 'recall': 0.8421052631578947, 'f1': 0.761904761904762, 'number': 19} | {'precision': 0.5, 'recall': 0.3333333333333333, 'f1': 0.4, 'number': 3} | {'precision': 0.8292682926829268, 'recall': 0.8095238095238095, 'f1': 0.8192771084337348, 'number': 84} | {'precision': 0.6956521739130435, 'recall': 0.7619047619047619, 'f1': 0.7272727272727272, 'number': 21} | {'precision': 0.7478991596638656, 'recall': 0.5913621262458472, 'f1': 0.660482374768089, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.688135593220339, 'recall': 0.9485981308411215, 'f1': 0.7976424361493124, 'number': 214} | {'precision': 0.7342657342657343, 'recall': 0.84, 'f1': 0.7835820895522388, 'number': 125} | {'precision': 0.8412911903160726, 'recall': 0.9571537872991583, 'f1': 0.8954903364352182, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.7887323943661971, 'recall': 0.8484848484848485, 'f1': 0.8175182481751825, 'number': 66} | {'precision': 0.7829457364341085, 'recall': 0.8782608695652174, 'f1': 0.8278688524590164, 'number': 115} | {'precision': 0.7291666666666666, 'recall': 0.8974358974358975, 'f1': 0.8045977011494253, 'number': 39} | 0.8122 | 0.9054 | 0.8563 | 0.8197 |
| 0.004 | 39.0 | 1404 | 1.1050 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6382978723404256, 'recall': 0.7894736842105263, 'f1': 0.7058823529411764, 'number': 38} | {'precision': 0.8103448275862069, 'recall': 0.7966101694915254, 'f1': 0.8034188034188032, 'number': 59} | {'precision': 0.6666666666666666, 'recall': 0.8421052631578947, 'f1': 0.744186046511628, 'number': 19} | {'precision': 0.5, 'recall': 0.3333333333333333, 'f1': 0.4, 'number': 3} | {'precision': 0.8202247191011236, 'recall': 0.8690476190476191, 'f1': 0.8439306358381503, 'number': 84} | {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21} | {'precision': 0.7478991596638656, 'recall': 0.5913621262458472, 'f1': 0.660482374768089, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.697594501718213, 'recall': 0.9485981308411215, 'f1': 0.8039603960396039, 'number': 214} | {'precision': 0.7464788732394366, 'recall': 0.848, 'f1': 0.7940074906367041, 'number': 125} | {'precision': 0.8362846642165052, 'recall': 0.9575363427697016, 'f1': 0.8928125557339041, 'number': 2614} | {'precision': 0.9032258064516129, 'recall': 0.8484848484848485, 'f1': 0.875, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.7593984962406015, 'recall': 0.8782608695652174, 'f1': 0.8145161290322581, 'number': 115} | {'precision': 0.7291666666666666, 'recall': 0.8974358974358975, 'f1': 0.8045977011494253, 'number': 39} | 0.8097 | 0.9086 | 0.8563 | 0.8183 |
| 0.0039 | 40.0 | 1440 | 1.1108 | {'precision': 0.8125, 'recall': 1.0, 'f1': 0.896551724137931, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6382978723404256, 'recall': 0.7894736842105263, 'f1': 0.7058823529411764, 'number': 38} | {'precision': 0.75, 'recall': 0.8135593220338984, 'f1': 0.7804878048780488, 'number': 59} | {'precision': 0.64, 'recall': 0.8421052631578947, 'f1': 0.7272727272727272, 'number': 19} | {'precision': 0.5, 'recall': 0.3333333333333333, 'f1': 0.4, 'number': 3} | {'precision': 0.813953488372093, 'recall': 0.8333333333333334, 'f1': 0.8235294117647058, 'number': 84} | {'precision': 0.6956521739130435, 'recall': 0.7619047619047619, 'f1': 0.7272727272727272, 'number': 21} | {'precision': 0.7510548523206751, 'recall': 0.5913621262458472, 'f1': 0.6617100371747212, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.6972789115646258, 'recall': 0.9579439252336449, 'f1': 0.8070866141732282, 'number': 214} | {'precision': 0.7681159420289855, 'recall': 0.848, 'f1': 0.8060836501901141, 'number': 125} | {'precision': 0.8356118706235411, 'recall': 0.9586840091813313, 'f1': 0.892927133440228, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.7709923664122137, 'recall': 0.8782608695652174, 'f1': 0.8211382113821138, 'number': 115} | {'precision': 0.723404255319149, 'recall': 0.8717948717948718, 'f1': 0.7906976744186047, 'number': 39} | 0.8090 | 0.9089 | 0.8560 | 0.8171 |
| 0.0036 | 41.0 | 1476 | 1.0393 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6382978723404256, 'recall': 0.7894736842105263, 'f1': 0.7058823529411764, 'number': 38} | {'precision': 0.7619047619047619, 'recall': 0.8135593220338984, 'f1': 0.7868852459016393, 'number': 59} | {'precision': 0.6666666666666666, 'recall': 0.8421052631578947, 'f1': 0.744186046511628, 'number': 19} | {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f1': 0.6666666666666666, 'number': 3} | {'precision': 0.8222222222222222, 'recall': 0.8809523809523809, 'f1': 0.8505747126436781, 'number': 84} | {'precision': 0.6956521739130435, 'recall': 0.7619047619047619, 'f1': 0.7272727272727272, 'number': 21} | {'precision': 0.7478991596638656, 'recall': 0.5913621262458472, 'f1': 0.660482374768089, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.68, 'recall': 0.9532710280373832, 'f1': 0.7937743190661478, 'number': 214} | {'precision': 0.762589928057554, 'recall': 0.848, 'f1': 0.8030303030303031, 'number': 125} | {'precision': 0.8457374830852503, 'recall': 0.9563886763580719, 'f1': 0.8976660682226212, 'number': 2614} | {'precision': 0.9032258064516129, 'recall': 0.8484848484848485, 'f1': 0.875, 'number': 33} | {'precision': 0.7972972972972973, 'recall': 0.8939393939393939, 'f1': 0.8428571428571429, 'number': 66} | {'precision': 0.7829457364341085, 'recall': 0.8782608695652174, 'f1': 0.8278688524590164, 'number': 115} | {'precision': 0.7391304347826086, 'recall': 0.8717948717948718, 'f1': 0.7999999999999999, 'number': 39} | 0.8152 | 0.9081 | 0.8591 | 0.8225 |
| 0.0036 | 42.0 | 1512 | 1.0768 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.625, 'recall': 0.7894736842105263, 'f1': 0.6976744186046512, 'number': 38} | {'precision': 0.8245614035087719, 'recall': 0.7966101694915254, 'f1': 0.8103448275862069, 'number': 59} | {'precision': 0.7272727272727273, 'recall': 0.8421052631578947, 'f1': 0.7804878048780488, 'number': 19} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.8295454545454546, 'recall': 0.8690476190476191, 'f1': 0.8488372093023256, 'number': 84} | {'precision': 0.6956521739130435, 'recall': 0.7619047619047619, 'f1': 0.7272727272727272, 'number': 21} | {'precision': 0.7574468085106383, 'recall': 0.5913621262458472, 'f1': 0.6641791044776119, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.6788079470198676, 'recall': 0.9579439252336449, 'f1': 0.7945736434108528, 'number': 214} | {'precision': 0.7681159420289855, 'recall': 0.848, 'f1': 0.8060836501901141, 'number': 125} | {'precision': 0.8413445378151261, 'recall': 0.9575363427697016, 'f1': 0.8956879584898908, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.7709923664122137, 'recall': 0.8782608695652174, 'f1': 0.8211382113821138, 'number': 115} | {'precision': 0.7142857142857143, 'recall': 0.8974358974358975, 'f1': 0.7954545454545455, 'number': 39} | 0.8134 | 0.9091 | 0.8586 | 0.8211 |
| 0.0029 | 43.0 | 1548 | 1.1414 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.625, 'recall': 0.7894736842105263, 'f1': 0.6976744186046512, 'number': 38} | {'precision': 0.8518518518518519, 'recall': 0.7796610169491526, 'f1': 0.8141592920353983, 'number': 59} | {'precision': 0.7272727272727273, 'recall': 0.8421052631578947, 'f1': 0.7804878048780488, 'number': 19} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.8275862068965517, 'recall': 0.8571428571428571, 'f1': 0.8421052631578947, 'number': 84} | {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21} | {'precision': 0.7385892116182573, 'recall': 0.5913621262458472, 'f1': 0.6568265682656825, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.6902356902356902, 'recall': 0.9579439252336449, 'f1': 0.802348336594912, 'number': 214} | {'precision': 0.762589928057554, 'recall': 0.848, 'f1': 0.8030303030303031, 'number': 125} | {'precision': 0.8350549816727757, 'recall': 0.9586840091813313, 'f1': 0.8926090828138914, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.7611940298507462, 'recall': 0.8869565217391304, 'f1': 0.8192771084337349, 'number': 115} | {'precision': 0.7142857142857143, 'recall': 0.8974358974358975, 'f1': 0.7954545454545455, 'number': 39} | 0.8088 | 0.9099 | 0.8564 | 0.8175 |
| 0.0038 | 44.0 | 1584 | 1.1545 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.625, 'recall': 0.7894736842105263, 'f1': 0.6976744186046512, 'number': 38} | {'precision': 0.8363636363636363, 'recall': 0.7796610169491526, 'f1': 0.8070175438596492, 'number': 59} | {'precision': 0.7272727272727273, 'recall': 0.8421052631578947, 'f1': 0.7804878048780488, 'number': 19} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.8275862068965517, 'recall': 0.8571428571428571, 'f1': 0.8421052631578947, 'number': 84} | {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21} | {'precision': 0.7385892116182573, 'recall': 0.5913621262458472, 'f1': 0.6568265682656825, 'number': 301} | {'precision': 0.8666666666666667, 'recall': 0.8666666666666667, 'f1': 0.8666666666666667, 'number': 15} | {'precision': 0.6986301369863014, 'recall': 0.9532710280373832, 'f1': 0.8063241106719368, 'number': 214} | {'precision': 0.7571428571428571, 'recall': 0.848, 'f1': 0.7999999999999999, 'number': 125} | {'precision': 0.8335548172757475, 'recall': 0.959831675592961, 'f1': 0.8922475106685633, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.753731343283582, 'recall': 0.8782608695652174, 'f1': 0.8112449799196787, 'number': 115} | {'precision': 0.7142857142857143, 'recall': 0.8974358974358975, 'f1': 0.7954545454545455, 'number': 39} | 0.8082 | 0.9102 | 0.8562 | 0.8171 |
| 0.0032 | 45.0 | 1620 | 1.1056 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.625, 'recall': 0.7894736842105263, 'f1': 0.6976744186046512, 'number': 38} | {'precision': 0.8135593220338984, 'recall': 0.8135593220338984, 'f1': 0.8135593220338985, 'number': 59} | {'precision': 0.7272727272727273, 'recall': 0.8421052631578947, 'f1': 0.7804878048780488, 'number': 19} | {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f1': 0.6666666666666666, 'number': 3} | {'precision': 0.8202247191011236, 'recall': 0.8690476190476191, 'f1': 0.8439306358381503, 'number': 84} | {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21} | {'precision': 0.7416666666666667, 'recall': 0.5913621262458472, 'f1': 0.6580406654343807, 'number': 301} | {'precision': 0.8666666666666667, 'recall': 0.8666666666666667, 'f1': 0.8666666666666667, 'number': 15} | {'precision': 0.6891891891891891, 'recall': 0.9532710280373832, 'f1': 0.8, 'number': 214} | {'precision': 0.7681159420289855, 'recall': 0.848, 'f1': 0.8060836501901141, 'number': 125} | {'precision': 0.8404969778374748, 'recall': 0.9575363427697016, 'f1': 0.8952074391988555, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.7769230769230769, 'recall': 0.8782608695652174, 'f1': 0.8244897959183674, 'number': 115} | {'precision': 0.7291666666666666, 'recall': 0.8974358974358975, 'f1': 0.8045977011494253, 'number': 39} | 0.8131 | 0.9094 | 0.8585 | 0.8215 |
| 0.0029 | 46.0 | 1656 | 1.1540 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.625, 'recall': 0.7894736842105263, 'f1': 0.6976744186046512, 'number': 38} | {'precision': 0.8135593220338984, 'recall': 0.8135593220338984, 'f1': 0.8135593220338985, 'number': 59} | {'precision': 0.7272727272727273, 'recall': 0.8421052631578947, 'f1': 0.7804878048780488, 'number': 19} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.8111111111111111, 'recall': 0.8690476190476191, 'f1': 0.8390804597701149, 'number': 84} | {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21} | {'precision': 0.7385892116182573, 'recall': 0.5913621262458472, 'f1': 0.6568265682656825, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.6938775510204082, 'recall': 0.9532710280373832, 'f1': 0.8031496062992125, 'number': 214} | {'precision': 0.75177304964539, 'recall': 0.848, 'f1': 0.7969924812030075, 'number': 125} | {'precision': 0.833666001330672, 'recall': 0.9586840091813313, 'f1': 0.8918149466192171, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.7593984962406015, 'recall': 0.8782608695652174, 'f1': 0.8145161290322581, 'number': 115} | {'precision': 0.7083333333333334, 'recall': 0.8717948717948718, 'f1': 0.7816091954022988, 'number': 39} | 0.8067 | 0.9099 | 0.8552 | 0.8158 |
| 0.0027 | 47.0 | 1692 | 1.1618 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6122448979591837, 'recall': 0.7894736842105263, 'f1': 0.6896551724137931, 'number': 38} | {'precision': 0.8135593220338984, 'recall': 0.8135593220338984, 'f1': 0.8135593220338985, 'number': 59} | {'precision': 0.7272727272727273, 'recall': 0.8421052631578947, 'f1': 0.7804878048780488, 'number': 19} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.8111111111111111, 'recall': 0.8690476190476191, 'f1': 0.8390804597701149, 'number': 84} | {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21} | {'precision': 0.7385892116182573, 'recall': 0.5913621262458472, 'f1': 0.6568265682656825, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.6938775510204082, 'recall': 0.9532710280373832, 'f1': 0.8031496062992125, 'number': 214} | {'precision': 0.7412587412587412, 'recall': 0.848, 'f1': 0.791044776119403, 'number': 125} | {'precision': 0.8327248589445735, 'recall': 0.959831675592961, 'f1': 0.8917718144659676, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.7972972972972973, 'recall': 0.8939393939393939, 'f1': 0.8428571428571429, 'number': 66} | {'precision': 0.7593984962406015, 'recall': 0.8782608695652174, 'f1': 0.8145161290322581, 'number': 115} | {'precision': 0.723404255319149, 'recall': 0.8717948717948718, 'f1': 0.7906976744186047, 'number': 39} | 0.8056 | 0.9104 | 0.8548 | 0.8156 |
| 0.0029 | 48.0 | 1728 | 1.1321 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.625, 'recall': 0.7894736842105263, 'f1': 0.6976744186046512, 'number': 38} | {'precision': 0.8103448275862069, 'recall': 0.7966101694915254, 'f1': 0.8034188034188032, 'number': 59} | {'precision': 0.7272727272727273, 'recall': 0.8421052631578947, 'f1': 0.7804878048780488, 'number': 19} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.8202247191011236, 'recall': 0.8690476190476191, 'f1': 0.8439306358381503, 'number': 84} | {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21} | {'precision': 0.7416666666666667, 'recall': 0.5913621262458472, 'f1': 0.6580406654343807, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.6915254237288135, 'recall': 0.9532710280373832, 'f1': 0.8015717092337917, 'number': 214} | {'precision': 0.75177304964539, 'recall': 0.848, 'f1': 0.7969924812030075, 'number': 125} | {'precision': 0.8361695028361695, 'recall': 0.9586840091813313, 'f1': 0.8932454108002139, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.7593984962406015, 'recall': 0.8782608695652174, 'f1': 0.8145161290322581, 'number': 115} | {'precision': 0.723404255319149, 'recall': 0.8717948717948718, 'f1': 0.7906976744186047, 'number': 39} | 0.8087 | 0.9096 | 0.8562 | 0.8177 |
| 0.0032 | 49.0 | 1764 | 1.1474 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6122448979591837, 'recall': 0.7894736842105263, 'f1': 0.6896551724137931, 'number': 38} | {'precision': 0.8245614035087719, 'recall': 0.7966101694915254, 'f1': 0.8103448275862069, 'number': 59} | {'precision': 0.7272727272727273, 'recall': 0.8421052631578947, 'f1': 0.7804878048780488, 'number': 19} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.8222222222222222, 'recall': 0.8809523809523809, 'f1': 0.8505747126436781, 'number': 84} | {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21} | {'precision': 0.7416666666666667, 'recall': 0.5913621262458472, 'f1': 0.6580406654343807, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.6915254237288135, 'recall': 0.9532710280373832, 'f1': 0.8015717092337917, 'number': 214} | {'precision': 0.75177304964539, 'recall': 0.848, 'f1': 0.7969924812030075, 'number': 125} | {'precision': 0.835109926715523, 'recall': 0.9590665646518746, 'f1': 0.8928062678062678, 'number': 2614} | {'precision': 0.8484848484848485, 'recall': 0.8484848484848485, 'f1': 0.8484848484848486, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.7593984962406015, 'recall': 0.8782608695652174, 'f1': 0.8145161290322581, 'number': 115} | {'precision': 0.723404255319149, 'recall': 0.8717948717948718, 'f1': 0.7906976744186047, 'number': 39} | 0.8080 | 0.9102 | 0.8561 | 0.8175 |
| 0.0023 | 50.0 | 1800 | 1.1473 | {'precision': 0.7647058823529411, 'recall': 1.0, 'f1': 0.8666666666666666, 'number': 13} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.2857142857142857, 'f1': 0.4, 'number': 14} | {'precision': 0.6122448979591837, 'recall': 0.7894736842105263, 'f1': 0.6896551724137931, 'number': 38} | {'precision': 0.8245614035087719, 'recall': 0.7966101694915254, 'f1': 0.8103448275862069, 'number': 59} | {'precision': 0.7272727272727273, 'recall': 0.8421052631578947, 'f1': 0.7804878048780488, 'number': 19} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.8222222222222222, 'recall': 0.8809523809523809, 'f1': 0.8505747126436781, 'number': 84} | {'precision': 0.7083333333333334, 'recall': 0.8095238095238095, 'f1': 0.7555555555555556, 'number': 21} | {'precision': 0.7416666666666667, 'recall': 0.5913621262458472, 'f1': 0.6580406654343807, 'number': 301} | {'precision': 0.7647058823529411, 'recall': 0.8666666666666667, 'f1': 0.8125, 'number': 15} | {'precision': 0.6891891891891891, 'recall': 0.9532710280373832, 'f1': 0.8, 'number': 214} | {'precision': 0.75177304964539, 'recall': 0.848, 'f1': 0.7969924812030075, 'number': 125} | {'precision': 0.8356118706235411, 'recall': 0.9586840091813313, 'f1': 0.892927133440228, 'number': 2614} | {'precision': 0.875, 'recall': 0.8484848484848485, 'f1': 0.8615384615384615, 'number': 33} | {'precision': 0.8, 'recall': 0.9090909090909091, 'f1': 0.8510638297872342, 'number': 66} | {'precision': 0.7593984962406015, 'recall': 0.8782608695652174, 'f1': 0.8145161290322581, 'number': 115} | {'precision': 0.7142857142857143, 'recall': 0.8974358974358975, 'f1': 0.7954545454545455, 'number': 39} | 0.8082 | 0.9102 | 0.8562 | 0.8177 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
CLAck/vi-en | [
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This is my second model for the class: a pre-trained CelebA-HQ DDPM fine-tuned on butterfly images for two epochs.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('reachrkr/ddpm-celebahq-finetuned-butterflies-2epochs-1')
image = pipeline().images[0]
image
```
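The pipeline call also accepts a `batch_size` argument, so several samples can be drawn and saved in one go — a minimal sketch (the filenames here are just for illustration):
```python
# Draw four samples at once; pipeline() returns PIL images by default.
images = pipeline(batch_size=4).images
for i, img in enumerate(images):
    img.save(f"butterfly_{i}.png")
```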
|
CLS/WubiBERT_models | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-12-31T01:04:15Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the course notebook (not a library import)
model = load_from_hub(repo_id="rachmanilove/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
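Once loaded, the agent can be evaluated by acting greedily with respect to the stored Q-table. A rough sketch, assuming the pickled dict keeps the table under a "qtable" key (as in the course notebooks) and the classic gym step API:
```python
import gym
import numpy as np

env = gym.make(model["env_id"])
state = env.reset()  # classic gym API: reset() returns the initial state
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode return: {total_reward}")
```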
|
CLTL/MedRoBERTa.nl | [
"pytorch",
"roberta",
"fill-mask",
"nl",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,988 | 2022-12-31T01:24:17Z | ---
language: en
thumbnail: http://www.huggingtweets.com/libsoftiktok/1672449970711/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1489097242321428482/sQSUN_M6_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Libs of TikTok</div>
<div style="text-align: center; font-size: 14px;">@libsoftiktok</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Libs of TikTok.
| Data | Libs of TikTok |
| --- | --- |
| Tweets downloaded | 3194 |
| Retweets | 818 |
| Short tweets | 423 |
| Tweets kept | 1953 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1qrbsjl9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @libsoftiktok's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fu7kfwor) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fu7kfwor/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/libsoftiktok')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
CLTL/gm-ner-xlmrbase | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"nl",
"transformers",
"dighum",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2022-12-31T01:30:03Z | ---
license: creativeml-openrail-m
---
## Hypernetwork
Effects will appear around the character image, as in Japanese mobile games.
# Recommended setting
Model: Abyss_7th_anime_v1.1
https://huggingface.co/syaimu/7th_Layer/blob/main/Abyss_7th_anime_v1.1.ckpt
Hypernetwork strength: 0.25
Highres. fix: ON
Upscale latent space image when doing hires. fix: ON
Please follow this prompt:
1girl, (realistic:1.3),isometric figure, (tachi-e:1.2), (transparent background:1.4), (white background:1.4), (beautiful detailed face:1.0)
# Sample prompt
https://majinai.art/ja/i/8359WPL
<img src="https://i.imgur.com/a7M42HA.jpg" width="480" height="">
https://majinai.art/ja/i/UTeEryV
<img src="https://i.imgur.com/DvmLKdC.jpg" width="480" height="">
https://majinai.art/ja/i/1q1r8RF
<img src="https://i.imgur.com/Air3hHb.jpg" width="480" height="">
|
CLTL/icf-levels-att | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | 2022-12-31T01:45:12Z | ---
license: cc-by-nc-sa-4.0
language:
- en
thumbnail: "https://huggingface.co/GeneralAwareness/Nirphoto/resolve/main/Nirphoto.png"
tags:
- stable-diffusion
- v2
- text-to-image
- image-to-image
- Embedding
---
Textual Inversion Embedding by General Awareness for SD 2.x, trained on 768x768 images from various sources.
Install it by downloading the .pt embedding and putting it in the \embeddings folder.
This embedding was made to simulate NIR (Near Infrared) photography (that otherworldly snowy look), and it does some wild things when used as a negative prompt as well.
---
Use keyword: image in nirphoto-3000 style, nirphoto-3000 style, nirphoto-3000, in the style of nirphoto-3000, or by nirphoto-3000.
---
an elephant at night with a full moon in the background village background, nirphoto-3000

a forest with a pond ((full of ducks)) the middle, nirphoto-3000
)_the_middle_nirphoto-3000.png)
Latent Space, the final frontier into the mind of madness

Using Nirphoto-3000 as a negative prompt of the above for some 1970s trippy vibes.
 |
CLTL/icf-levels-ber | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | 2022-12-31T01:45:23Z | ---
duplicated_from: Imablank/AnythingV3-1
---
## My AnythingV3 Mixes & Other Dreambooth Models
1. AnimeCH = Chinese Dreambooth model; focuses more on cel shading similar to anime, with a distinct style that differs from standard AnythingV3
2. anyNhas1-4 = Blossom Extract, but I mixed in HassanBlend instead of F222
3. chromaNanyhas1-4 = Mix of anyNhas1-4 and ChromaV5; focuses more on detailed backgrounds, and body anatomy is more consistent compared to AnythingV3
4. a1-4m = experimental mix of anyNhas1-4 and MMD, combined with its VAE; similar to chromaNanyhas1-4 but far too colorful and saturated, so I do not recommend it. |
CLTL/icf-levels-enr | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | 2022-12-31T02:17:40Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
A DDPM pipeline fine-tuned from a CelebA-HQ checkpoint on butterfly images for two epochs, as the repository name indicates.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('bobber/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
CLTL/icf-levels-etn | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | 2022-12-31T02:21:55Z | ---
language:
- en
thumbnail: "https://huggingface.co/lexaprosaic/Imperial-Diffusion/blob/main/282D6868-546F-4841-B8DC-D12BBD54E52A.jpeg"
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- safetensors
- diffusers
inference: true
---
This is a preliminary model based on the famous collection of images taken by Sergei Prokudin-Gorskii, widely regarded as the oldest and most comprehensive collection of color photography in existence. The photographs were taken between 1909 and 1915 in pre-revolutionary Russia and offer a glimpse into the past that few other datasets can claim.
- Based on a Stable Diffusion 1.5 fork
- Dataset consists of 1,277 pre-composited 512px images extracted from the Library of Congress public-access portal
- Trained on an A100 GPU for six hours at 2,000-step epochs
- Fine-tuned with a large-scale EveryDream yaml |
CLTL/icf-levels-mbw | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | 2022-12-31T02:29:00Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- tedlium2
license: cc-by-4.0
---
## ESPnet2 ASR model
### `pyf98/tedlium2_ctc_conformer_e12_linear2048`
This model was trained by Yifan Peng using tedlium2 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout e62de171f1d11015cb856f83780c61bd5ca7fa8f
pip install -e .
cd egs2/tedlium2/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/tedlium2_ctc_conformer_e12_linear2048
```
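Inference can also be run directly from Python. A minimal sketch, assuming `espnet_model_zoo` is installed and `speech.wav` is a 16 kHz mono recording; note this is a CTC-only model (ctc_weight 1.0), so decoding options may need to mirror the recipe's CTC decode config:
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads the checkpoint from the Hub on first use (requires espnet_model_zoo).
speech2text = Speech2Text.from_pretrained("pyf98/tedlium2_ctc_conformer_e12_linear2048")

speech, rate = soundfile.read("speech.wav")  # expects 16 kHz mono audio
text, tokens, token_ids, hyp = speech2text(speech)[0]  # best hypothesis first
print(text)
```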
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Dec 30 14:56:03 CST 2022`
- python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]`
- espnet version: `espnet 202211`
- pytorch version: `pytorch 1.12.1`
- Git hash: `e62de171f1d11015cb856f83780c61bd5ca7fa8f`
- Commit date: `Thu Dec 29 14:18:44 2022 -0500`
## asr_train_asr_ctc_conformer_e12_linear2048_raw_en_bpe500_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/dev|466|14671|92.4|5.4|2.2|1.2|8.9|75.1|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/test|1155|27500|92.6|5.0|2.5|1.1|8.5|70.3|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/dev|466|78259|97.0|0.9|2.1|1.2|4.2|75.1|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/test|1155|145066|97.0|0.9|2.1|1.2|4.2|70.3|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/dev|466|28296|94.6|3.1|2.4|1.2|6.6|75.1|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/test|1155|52113|94.9|2.7|2.4|1.2|6.3|70.3|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_ctc_conformer_e12_linear2048.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_ctc_conformer_e12_linear2048_raw_en_bpe500_sp
ngpu: 1
seed: 2022
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 47181
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- cer_ctc
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 50000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- s
- ▁the
- t
- ▁a
- ▁and
- ▁to
- d
- e
- ▁of
- ''''
- n
- ing
- ▁in
- ▁i
- ▁that
- i
- a
- l
- p
- m
- y
- o
- ▁it
- ▁we
- c
- u
- ▁you
- ed
- ▁
- r
- ▁is
- re
- ▁this
- ar
- g
- ▁so
- al
- b
- ▁s
- or
- ▁f
- ▁c
- in
- k
- f
- ▁for
- ic
- er
- le
- ▁be
- ▁do
- ▁re
- ve
- ▁e
- ▁w
- ▁was
- es
- ▁they
- ly
- h
- ▁on
- v
- ▁are
- ri
- ▁have
- an
- ▁what
- ▁with
- ▁t
- w
- ur
- it
- ent
- ▁can
- ▁he
- ▁but
- ra
- ce
- ▁me
- ▁b
- ▁ma
- ▁p
- ll
- ▁st
- ▁one
- 'on'
- ▁about
- th
- ▁de
- en
- ▁all
- ▁not
- il
- ▁g
- ch
- at
- ▁there
- ▁mo
- ter
- ation
- tion
- ▁at
- ▁my
- ro
- ▁as
- te
- ▁le
- ▁con
- ▁like
- ▁people
- ▁or
- ▁an
- el
- ▁if
- ▁from
- ver
- ▁su
- ▁co
- ate
- ▁these
- ol
- ci
- ▁now
- ▁see
- ▁out
- ▁our
- ion
- ▁know
- ect
- ▁just
- as
- ▁ex
- ▁ch
- ▁d
- ▁when
- ▁very
- ▁think
- ▁who
- ▁because
- ▁go
- ▁up
- ▁us
- ▁pa
- ▁no
- ies
- ▁di
- ▁ho
- om
- ive
- ▁get
- id
- ▁o
- ▁hi
- un
- ▁how
- ▁by
- ir
- et
- ck
- ity
- ▁po
- ul
- ▁which
- ▁mi
- ▁some
- z
- ▁sp
- ▁un
- ▁going
- ▁pro
- ist
- ▁se
- ▁look
- ▁time
- ment
- de
- ▁more
- ▁had
- ng
- ▁would
- ge
- la
- ▁here
- ▁really
- x
- ▁your
- ▁them
- us
- me
- ▁en
- ▁two
- ▁k
- ▁li
- ▁world
- ne
- ow
- ▁way
- ▁want
- ▁work
- ▁don
- ▁lo
- ▁fa
- ▁were
- ▁their
- age
- vi
- ▁ha
- ac
- der
- est
- ▁bo
- am
- ▁other
- able
- ▁actually
- ▁sh
- ▁make
- ▁ba
- ▁la
- ine
- ▁into
- ▁where
- ▁could
- ▁comp
- ting
- ▁has
- ▁will
- ▁ne
- j
- ical
- ally
- ▁vi
- ▁things
- ▁te
- igh
- ▁say
- ▁years
- ers
- ▁ra
- ther
- ▁than
- ru
- ▁ro
- op
- ▁did
- ▁any
- ▁new
- ound
- ig
- ▁well
- mo
- ▁she
- ▁na
- ▁been
- he
- ▁thousand
- ▁car
- ▁take
- ▁right
- ▁then
- ▁need
- ▁start
- ▁hundred
- ▁something
- ▁over
- ▁com
- ia
- ▁kind
- um
- if
- ▁those
- ▁first
- ▁pre
- ta
- ▁said
- ize
- end
- ▁even
- ▁thing
- one
- ▁back
- ite
- ▁every
- ▁little
- ry
- ▁life
- ▁much
- ke
- ▁also
- ▁most
- ant
- per
- ▁three
- ▁come
- ▁lot
- ance
- ▁got
- ▁talk
- ▁per
- ▁inter
- ▁sa
- ▁use
- ▁mu
- ▁part
- ish
- ence
- ▁happen
- ▁bi
- ▁mean
- ough
- ▁qu
- ▁bu
- ▁day
- ▁ga
- ▁only
- ▁many
- ▁different
- ▁dr
- ▁th
- ▁show
- ful
- ▁down
- ated
- ▁good
- ▁tra
- ▁around
- ▁idea
- ▁human
- ous
- ▁put
- ▁through
- ▁five
- ▁why
- ▁change
- ▁real
- ff
- ible
- ▁fact
- ▁same
- ▁jo
- ▁live
- ▁year
- ▁problem
- ▁ph
- ▁four
- ▁give
- ▁big
- ▁tell
- ▁great
- ▁try
- ▁va
- ▁ru
- ▁system
- ▁six
- ▁plan
- ▁place
- ▁build
- ▁called
- ▁again
- ▁point
- ▁twenty
- ▁percent
- ▁nine
- ▁find
- ▁app
- ▁after
- ▁long
- ▁eight
- ▁imp
- ▁gene
- ▁design
- ▁today
- ▁should
- ▁made
- ious
- ▁came
- ▁learn
- ▁last
- ▁own
- way
- ▁turn
- ▁seven
- ▁high
- ▁question
- ▁person
- ▁brain
- ▁important
- ▁another
- ▁thought
- ▁trans
- ▁create
- ness
- ▁hu
- ▁power
- ▁act
- land
- ▁play
- ▁sort
- ▁old
- ▁before
- ▁course
- ▁understand
- ▁feel
- ▁might
- ▁each
- ▁million
- ▁better
- ▁together
- ▁ago
- ▁example
- ▁help
- ▁story
- ▁next
- ▁hand
- ▁school
- ▁water
- ▁develop
- ▁technology
- que
- ▁second
- ▁grow
- ▁still
- ▁cell
- ▁believe
- ▁number
- ▁small
- ▁between
- qui
- ▁data
- ▁become
- ▁america
- ▁maybe
- ▁space
- ▁project
- ▁organ
- ▁vo
- ▁children
- ▁book
- graph
- ▁open
- ▁fifty
- ▁picture
- ▁health
- ▁thirty
- ▁africa
- ▁reason
- ▁large
- ▁hard
- ▁computer
- ▁always
- ▁sense
- ▁money
- ▁women
- ▁everything
- ▁information
- ▁country
- ▁teach
- ▁energy
- ▁experience
- ▁food
- ▁process
- qua
- ▁interesting
- ▁future
- ▁science
- q
- '0'
- '5'
- '6'
- '9'
- '3'
- '8'
- '4'
- N
- A
- '7'
- S
- G
- F
- R
- L
- U
- E
- T
- H
- _
- B
- D
- J
- M
- ă
- ō
- ť
- '2'
- '-'
- '1'
- C
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe500_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 1.0
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202211'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
CLTL/icf-levels-stm | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | ---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-v2
This model is a fine-tuned version of [roseman/whisper-tiny-ckb](https://huggingface.co/roseman/whisper-tiny-ckb) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0055
- Wer: 50.9922
## Model description
More information needed
## Intended uses & limitations
More information needed
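As a usage sketch with `transformers` (the repo id below is a placeholder; this card does not state where the checkpoint is published):
```python
from transformers import pipeline

# Placeholder repo id: substitute this checkpoint's actual Hub path.
asr = pipeline("automatic-speech-recognition", model="<user>/whisper-tiny-v2")
print(asr("audio.wav")["text"])
```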
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 70000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0181 | 4.0 | 10000 | 0.6723 | 54.6795 |
| 0.0039 | 8.01 | 20000 | 0.7814 | 52.9841 |
| 0.0025 | 12.01 | 30000 | 0.8667 | 52.7750 |
| 0.0002 | 16.02 | 40000 | 0.9316 | 52.5431 |
| 0.0009 | 20.02 | 50000 | 0.9454 | 51.6042 |
| 0.0 | 24.02 | 60000 | 0.9938 | 51.0872 |
| 0.0 | 28.03 | 70000 | 1.0055 | 50.9922 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
CM-CA/Cartman | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-12-31T02:37:12Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- tedlium2
license: cc-by-4.0
---
## ESPnet2 ASR model
### `pyf98/tedlium2_ctc_conformer_e15_linear1024`
This model was trained by Yifan Peng using tedlium2 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout e62de171f1d11015cb856f83780c61bd5ca7fa8f
pip install -e .
cd egs2/tedlium2/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/tedlium2_ctc_conformer_e15_linear1024
```
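For Python-side decoding, a minimal sketch (assumes `espnet_model_zoo` is installed and a 16 kHz mono input; since this is a CTC-only model, decoding options may need to follow the recipe's CTC decode config):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads the checkpoint from the Hub on first use (requires espnet_model_zoo).
speech2text = Speech2Text.from_pretrained("pyf98/tedlium2_ctc_conformer_e15_linear1024")
speech, rate = soundfile.read("speech.wav")  # 16 kHz mono
print(speech2text(speech)[0][0])  # first element of the best hypothesis is the text
```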
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Dec 30 08:37:09 CST 2022`
- python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]`
- espnet version: `espnet 202211`
- pytorch version: `pytorch 1.12.1`
- Git hash: `e62de171f1d11015cb856f83780c61bd5ca7fa8f`
- Commit date: `Thu Dec 29 14:18:44 2022 -0500`
## asr_train_asr_ctc_conformer_e15_linear1024_raw_en_bpe500_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/dev|466|14671|92.2|5.6|2.2|1.2|9.1|75.3|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/test|1155|27500|92.1|5.4|2.5|1.1|9.0|72.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/dev|466|78259|97.0|0.9|2.1|1.2|4.2|75.3|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/test|1155|145066|96.9|0.9|2.2|1.2|4.3|72.8|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/dev|466|28296|94.5|3.1|2.4|1.2|6.7|75.3|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/test|1155|52113|94.6|2.9|2.5|1.2|6.5|72.8|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_ctc_conformer_e15_linear1024.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_ctc_conformer_e15_linear1024_raw_en_bpe500_sp
ngpu: 1
seed: 2022
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 53439
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- cer_ctc
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 50000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- s
- ▁the
- t
- ▁a
- ▁and
- ▁to
- d
- e
- ▁of
- ''''
- n
- ing
- ▁in
- ▁i
- ▁that
- i
- a
- l
- p
- m
- y
- o
- ▁it
- ▁we
- c
- u
- ▁you
- ed
- ▁
- r
- ▁is
- re
- ▁this
- ar
- g
- ▁so
- al
- b
- ▁s
- or
- ▁f
- ▁c
- in
- k
- f
- ▁for
- ic
- er
- le
- ▁be
- ▁do
- ▁re
- ve
- ▁e
- ▁w
- ▁was
- es
- ▁they
- ly
- h
- ▁on
- v
- ▁are
- ri
- ▁have
- an
- ▁what
- ▁with
- ▁t
- w
- ur
- it
- ent
- ▁can
- ▁he
- ▁but
- ra
- ce
- ▁me
- ▁b
- ▁ma
- ▁p
- ll
- ▁st
- ▁one
- 'on'
- ▁about
- th
- ▁de
- en
- ▁all
- ▁not
- il
- ▁g
- ch
- at
- ▁there
- ▁mo
- ter
- ation
- tion
- ▁at
- ▁my
- ro
- ▁as
- te
- ▁le
- ▁con
- ▁like
- ▁people
- ▁or
- ▁an
- el
- ▁if
- ▁from
- ver
- ▁su
- ▁co
- ate
- ▁these
- ol
- ci
- ▁now
- ▁see
- ▁out
- ▁our
- ion
- ▁know
- ect
- ▁just
- as
- ▁ex
- ▁ch
- ▁d
- ▁when
- ▁very
- ▁think
- ▁who
- ▁because
- ▁go
- ▁up
- ▁us
- ▁pa
- ▁no
- ies
- ▁di
- ▁ho
- om
- ive
- ▁get
- id
- ▁o
- ▁hi
- un
- ▁how
- ▁by
- ir
- et
- ck
- ity
- ▁po
- ul
- ▁which
- ▁mi
- ▁some
- z
- ▁sp
- ▁un
- ▁going
- ▁pro
- ist
- ▁se
- ▁look
- ▁time
- ment
- de
- ▁more
- ▁had
- ng
- ▁would
- ge
- la
- ▁here
- ▁really
- x
- ▁your
- ▁them
- us
- me
- ▁en
- ▁two
- ▁k
- ▁li
- ▁world
- ne
- ow
- ▁way
- ▁want
- ▁work
- ▁don
- ▁lo
- ▁fa
- ▁were
- ▁their
- age
- vi
- ▁ha
- ac
- der
- est
- ▁bo
- am
- ▁other
- able
- ▁actually
- ▁sh
- ▁make
- ▁ba
- ▁la
- ine
- ▁into
- ▁where
- ▁could
- ▁comp
- ting
- ▁has
- ▁will
- ▁ne
- j
- ical
- ally
- ▁vi
- ▁things
- ▁te
- igh
- ▁say
- ▁years
- ers
- ▁ra
- ther
- ▁than
- ru
- ▁ro
- op
- ▁did
- ▁any
- ▁new
- ound
- ig
- ▁well
- mo
- ▁she
- ▁na
- ▁been
- he
- ▁thousand
- ▁car
- ▁take
- ▁right
- ▁then
- ▁need
- ▁start
- ▁hundred
- ▁something
- ▁over
- ▁com
- ia
- ▁kind
- um
- if
- ▁those
- ▁first
- ▁pre
- ta
- ▁said
- ize
- end
- ▁even
- ▁thing
- one
- ▁back
- ite
- ▁every
- ▁little
- ry
- ▁life
- ▁much
- ke
- ▁also
- ▁most
- ant
- per
- ▁three
- ▁come
- ▁lot
- ance
- ▁got
- ▁talk
- ▁per
- ▁inter
- ▁sa
- ▁use
- ▁mu
- ▁part
- ish
- ence
- ▁happen
- ▁bi
- ▁mean
- ough
- ▁qu
- ▁bu
- ▁day
- ▁ga
- ▁only
- ▁many
- ▁different
- ▁dr
- ▁th
- ▁show
- ful
- ▁down
- ated
- ▁good
- ▁tra
- ▁around
- ▁idea
- ▁human
- ous
- ▁put
- ▁through
- ▁five
- ▁why
- ▁change
- ▁real
- ff
- ible
- ▁fact
- ▁same
- ▁jo
- ▁live
- ▁year
- ▁problem
- ▁ph
- ▁four
- ▁give
- ▁big
- ▁tell
- ▁great
- ▁try
- ▁va
- ▁ru
- ▁system
- ▁six
- ▁plan
- ▁place
- ▁build
- ▁called
- ▁again
- ▁point
- ▁twenty
- ▁percent
- ▁nine
- ▁find
- ▁app
- ▁after
- ▁long
- ▁eight
- ▁imp
- ▁gene
- ▁design
- ▁today
- ▁should
- ▁made
- ious
- ▁came
- ▁learn
- ▁last
- ▁own
- way
- ▁turn
- ▁seven
- ▁high
- ▁question
- ▁person
- ▁brain
- ▁important
- ▁another
- ▁thought
- ▁trans
- ▁create
- ness
- ▁hu
- ▁power
- ▁act
- land
- ▁play
- ▁sort
- ▁old
- ▁before
- ▁course
- ▁understand
- ▁feel
- ▁might
- ▁each
- ▁million
- ▁better
- ▁together
- ▁ago
- ▁example
- ▁help
- ▁story
- ▁next
- ▁hand
- ▁school
- ▁water
- ▁develop
- ▁technology
- que
- ▁second
- ▁grow
- ▁still
- ▁cell
- ▁believe
- ▁number
- ▁small
- ▁between
- qui
- ▁data
- ▁become
- ▁america
- ▁maybe
- ▁space
- ▁project
- ▁organ
- ▁vo
- ▁children
- ▁book
- graph
- ▁open
- ▁fifty
- ▁picture
- ▁health
- ▁thirty
- ▁africa
- ▁reason
- ▁large
- ▁hard
- ▁computer
- ▁always
- ▁sense
- ▁money
- ▁women
- ▁everything
- ▁information
- ▁country
- ▁teach
- ▁energy
- ▁experience
- ▁food
- ▁process
- qua
- ▁interesting
- ▁future
- ▁science
- q
- '0'
- '5'
- '6'
- '9'
- '3'
- '8'
- '4'
- N
- A
- '7'
- S
- G
- F
- R
- L
- U
- E
- T
- H
- _
- B
- D
- J
- M
- ă
- ō
- ť
- '2'
- '-'
- '1'
- C
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe500_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 1.0
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 15
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202211'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
CM-CA/DialoGPT-small-cartman | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-12-31T02:42:02Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- tedlium2
license: cc-by-4.0
---
## ESPnet2 ASR model
### `pyf98/tedlium2_ctc_e_branchformer`
This model was trained by Yifan Peng using tedlium2 recipe in [espnet](https://github.com/espnet/espnet/).
References:
- [E-Branchformer: Branchformer with Enhanced merging for speech recognition (SLT 2022)](https://arxiv.org/abs/2210.00077)
- [Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding (ICML 2022)](https://proceedings.mlr.press/v162/peng22a.html)
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout e62de171f1d11015cb856f83780c61bd5ca7fa8f
pip install -e .
cd egs2/tedlium2/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/tedlium2_ctc_e_branchformer
```
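You can also decode from Python; a sketch, assuming `espnet_model_zoo` is installed and 16 kHz mono audio (this model is CTC-only, so decoding options may need to match the recipe's CTC decode config):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads the checkpoint from the Hub on first use (requires espnet_model_zoo).
speech2text = Speech2Text.from_pretrained("pyf98/tedlium2_ctc_e_branchformer")
speech, rate = soundfile.read("speech.wav")  # 16 kHz mono
text, *_ = speech2text(speech)[0]  # best hypothesis
print(text)
```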
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Dec 30 20:15:46 CST 2022`
- python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]`
- espnet version: `espnet 202211`
- pytorch version: `pytorch 1.12.1`
- Git hash: `e62de171f1d11015cb856f83780c61bd5ca7fa8f`
- Commit date: `Thu Dec 29 14:18:44 2022 -0500`
## asr_train_asr_ctc_e_branchformer_e12_mlp1024_linear1024_raw_en_bpe500_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/dev|466|14671|92.5|5.5|2.0|1.2|8.7|77.3|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/test|1155|27500|92.7|4.9|2.3|1.1|8.3|70.6|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/dev|466|78259|97.2|0.9|1.9|1.2|4.0|77.3|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/test|1155|145066|97.1|0.9|2.0|1.1|4.0|70.6|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/dev|466|28296|94.7|3.1|2.2|1.2|6.5|77.3|
|decode_asr_ctc_asr_model_valid.cer_ctc.ave/test|1155|52113|95.0|2.7|2.2|1.1|6.1|70.6|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_ctc_e_branchformer_e12_mlp1024_linear1024.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_ctc_e_branchformer_e12_mlp1024_linear1024_raw_en_bpe500_sp
ngpu: 1
seed: 2022
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 47545
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- cer_ctc
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 50000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- s
- ▁the
- t
- ▁a
- ▁and
- ▁to
- d
- e
- ▁of
- ''''
- n
- ing
- ▁in
- ▁i
- ▁that
- i
- a
- l
- p
- m
- y
- o
- ▁it
- ▁we
- c
- u
- ▁you
- ed
- ▁
- r
- ▁is
- re
- ▁this
- ar
- g
- ▁so
- al
- b
- ▁s
- or
- ▁f
- ▁c
- in
- k
- f
- ▁for
- ic
- er
- le
- ▁be
- ▁do
- ▁re
- ve
- ▁e
- ▁w
- ▁was
- es
- ▁they
- ly
- h
- ▁on
- v
- ▁are
- ri
- ▁have
- an
- ▁what
- ▁with
- ▁t
- w
- ur
- it
- ent
- ▁can
- ▁he
- ▁but
- ra
- ce
- ▁me
- ▁b
- ▁ma
- ▁p
- ll
- ▁st
- ▁one
- 'on'
- ▁about
- th
- ▁de
- en
- ▁all
- ▁not
- il
- ▁g
- ch
- at
- ▁there
- ▁mo
- ter
- ation
- tion
- ▁at
- ▁my
- ro
- ▁as
- te
- ▁le
- ▁con
- ▁like
- ▁people
- ▁or
- ▁an
- el
- ▁if
- ▁from
- ver
- ▁su
- ▁co
- ate
- ▁these
- ol
- ci
- ▁now
- ▁see
- ▁out
- ▁our
- ion
- ▁know
- ect
- ▁just
- as
- ▁ex
- ▁ch
- ▁d
- ▁when
- ▁very
- ▁think
- ▁who
- ▁because
- ▁go
- ▁up
- ▁us
- ▁pa
- ▁no
- ies
- ▁di
- ▁ho
- om
- ive
- ▁get
- id
- ▁o
- ▁hi
- un
- ▁how
- ▁by
- ir
- et
- ck
- ity
- ▁po
- ul
- ▁which
- ▁mi
- ▁some
- z
- ▁sp
- ▁un
- ▁going
- ▁pro
- ist
- ▁se
- ▁look
- ▁time
- ment
- de
- ▁more
- ▁had
- ng
- ▁would
- ge
- la
- ▁here
- ▁really
- x
- ▁your
- ▁them
- us
- me
- ▁en
- ▁two
- ▁k
- ▁li
- ▁world
- ne
- ow
- ▁way
- ▁want
- ▁work
- ▁don
- ▁lo
- ▁fa
- ▁were
- ▁their
- age
- vi
- ▁ha
- ac
- der
- est
- ▁bo
- am
- ▁other
- able
- ▁actually
- ▁sh
- ▁make
- ▁ba
- ▁la
- ine
- ▁into
- ▁where
- ▁could
- ▁comp
- ting
- ▁has
- ▁will
- ▁ne
- j
- ical
- ally
- ▁vi
- ▁things
- ▁te
- igh
- ▁say
- ▁years
- ers
- ▁ra
- ther
- ▁than
- ru
- ▁ro
- op
- ▁did
- ▁any
- ▁new
- ound
- ig
- ▁well
- mo
- ▁she
- ▁na
- ▁been
- he
- ▁thousand
- ▁car
- ▁take
- ▁right
- ▁then
- ▁need
- ▁start
- ▁hundred
- ▁something
- ▁over
- ▁com
- ia
- ▁kind
- um
- if
- ▁those
- ▁first
- ▁pre
- ta
- ▁said
- ize
- end
- ▁even
- ▁thing
- one
- ▁back
- ite
- ▁every
- ▁little
- ry
- ▁life
- ▁much
- ke
- ▁also
- ▁most
- ant
- per
- ▁three
- ▁come
- ▁lot
- ance
- ▁got
- ▁talk
- ▁per
- ▁inter
- ▁sa
- ▁use
- ▁mu
- ▁part
- ish
- ence
- ▁happen
- ▁bi
- ▁mean
- ough
- ▁qu
- ▁bu
- ▁day
- ▁ga
- ▁only
- ▁many
- ▁different
- ▁dr
- ▁th
- ▁show
- ful
- ▁down
- ated
- ▁good
- ▁tra
- ▁around
- ▁idea
- ▁human
- ous
- ▁put
- ▁through
- ▁five
- ▁why
- ▁change
- ▁real
- ff
- ible
- ▁fact
- ▁same
- ▁jo
- ▁live
- ▁year
- ▁problem
- ▁ph
- ▁four
- ▁give
- ▁big
- ▁tell
- ▁great
- ▁try
- ▁va
- ▁ru
- ▁system
- ▁six
- ▁plan
- ▁place
- ▁build
- ▁called
- ▁again
- ▁point
- ▁twenty
- ▁percent
- ▁nine
- ▁find
- ▁app
- ▁after
- ▁long
- ▁eight
- ▁imp
- ▁gene
- ▁design
- ▁today
- ▁should
- ▁made
- ious
- ▁came
- ▁learn
- ▁last
- ▁own
- way
- ▁turn
- ▁seven
- ▁high
- ▁question
- ▁person
- ▁brain
- ▁important
- ▁another
- ▁thought
- ▁trans
- ▁create
- ness
- ▁hu
- ▁power
- ▁act
- land
- ▁play
- ▁sort
- ▁old
- ▁before
- ▁course
- ▁understand
- ▁feel
- ▁might
- ▁each
- ▁million
- ▁better
- ▁together
- ▁ago
- ▁example
- ▁help
- ▁story
- ▁next
- ▁hand
- ▁school
- ▁water
- ▁develop
- ▁technology
- que
- ▁second
- ▁grow
- ▁still
- ▁cell
- ▁believe
- ▁number
- ▁small
- ▁between
- qui
- ▁data
- ▁become
- ▁america
- ▁maybe
- ▁space
- ▁project
- ▁organ
- ▁vo
- ▁children
- ▁book
- graph
- ▁open
- ▁fifty
- ▁picture
- ▁health
- ▁thirty
- ▁africa
- ▁reason
- ▁large
- ▁hard
- ▁computer
- ▁always
- ▁sense
- ▁money
- ▁women
- ▁everything
- ▁information
- ▁country
- ▁teach
- ▁energy
- ▁experience
- ▁food
- ▁process
- qua
- ▁interesting
- ▁future
- ▁science
- q
- '0'
- '5'
- '6'
- '9'
- '3'
- '8'
- '4'
- N
- A
- '7'
- S
- G
- F
- R
- L
- U
- E
- T
- H
- _
- B
- D
- J
- M
- ă
- ō
- ť
- '2'
- '-'
- '1'
- C
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe500_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 1.0
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: e_branchformer
encoder_conf:
output_size: 256
attention_heads: 4
attention_layer_type: rel_selfattn
pos_enc_layer_type: rel_pos
rel_pos_type: latest
cgmlp_linear_units: 1024
cgmlp_conv_kernel: 31
use_linear_after_conv: false
gate_activation: identity
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
layer_drop_rate: 0.0
linear_units: 1024
positionwise_layer_type: linear
use_ffn: true
macaron_ffn: true
merge_conv_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202211'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
CNT-UPenn/RoBERTa_for_seizureFrequency_QA | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-IAM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-IAM
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9814
- Accuracy: 0.5103
- F1: 0.4950
## Model description
More information needed
## Intended uses & limitations
More information needed
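As a usage sketch with `transformers` (the repo id is a placeholder, and the label names depend on the classifier head's configuration; neither is stated in this card):
```python
from transformers import pipeline

# Placeholder repo id: substitute this checkpoint's actual Hub path.
clf = pipeline("text-classification", model="<user>/distilbert-base-uncased-finetuned-IAM")
print(clf("An example sentence to classify."))
```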
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.5871 | 1.0 | 15 | 1.4971 | 0.3379 | 0.1821 |
| 1.4995 | 2.0 | 30 | 1.4588 | 0.3379 | 0.1707 |
| 1.464 | 3.0 | 45 | 1.4251 | 0.3655 | 0.2870 |
| 1.4105 | 4.0 | 60 | 1.4027 | 0.3793 | 0.2899 |
| 1.4269 | 5.0 | 75 | 1.3798 | 0.3793 | 0.2899 |
| 1.3835 | 6.0 | 90 | 1.3425 | 0.3724 | 0.3087 |
| 1.3885 | 7.0 | 105 | 1.3041 | 0.4069 | 0.3515 |
| 1.3286 | 8.0 | 120 | 1.3004 | 0.4621 | 0.4450 |
| 1.3572 | 9.0 | 135 | 1.2621 | 0.4345 | 0.3903 |
| 1.3176 | 10.0 | 150 | 1.2033 | 0.4552 | 0.4250 |
| 1.2509 | 11.0 | 165 | 1.1942 | 0.5034 | 0.4755 |
| 1.2781 | 12.0 | 180 | 1.1689 | 0.4828 | 0.4651 |
| 1.2156 | 13.0 | 195 | 1.1438 | 0.5034 | 0.4837 |
| 1.1518 | 14.0 | 210 | 1.1187 | 0.5034 | 0.4844 |
| 1.161 | 15.0 | 225 | 1.1013 | 0.5034 | 0.4858 |
| 1.1377 | 16.0 | 240 | 1.0882 | 0.5034 | 0.4796 |
| 1.1634 | 17.0 | 255 | 1.0692 | 0.5034 | 0.4860 |
| 1.0666 | 18.0 | 270 | 1.0591 | 0.5034 | 0.4772 |
| 1.1358 | 19.0 | 285 | 1.0455 | 0.5034 | 0.4736 |
| 1.1118 | 20.0 | 300 | 1.0313 | 0.5034 | 0.4872 |
| 1.0367 | 21.0 | 315 | 1.0228 | 0.5034 | 0.4853 |
| 1.0781 | 22.0 | 330 | 1.0106 | 0.5034 | 0.4857 |
| 1.0346 | 23.0 | 345 | 1.0034 | 0.5034 | 0.4935 |
| 1.1015 | 24.0 | 360 | 1.0032 | 0.5034 | 0.4806 |
| 1.0147 | 25.0 | 375 | 0.9911 | 0.5103 | 0.4903 |
| 1.0144 | 26.0 | 390 | 0.9856 | 0.5103 | 0.4972 |
| 1.022 | 27.0 | 405 | 0.9835 | 0.5103 | 0.4982 |
| 1.0218 | 28.0 | 420 | 0.9821 | 0.5103 | 0.4955 |
| 1.0173 | 29.0 | 435 | 0.9811 | 0.5103 | 0.4950 |
| 1.0241 | 30.0 | 450 | 0.9814 | 0.5103 | 0.4950 |
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.0
- Datasets 2.10.1
- Tokenizers 0.11.0
|
CSZay/bart | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-12-31T02:55:08Z | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- animal
widget:
- text: a photo of LeDude dog in the Acropolis
---
# DreamBooth model for the LeDude concept trained by Antiraedus on the Antiraedus/Dude dataset.
This is a Stable Diffusion model fine-tuned on the LeDude concept with DreamBooth; LeDude is my 10-year-old Australian Silky Terrier.
It can be used by modifying the `instance_prompt`: **a photo of LeDude dog**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `dog` images for the animal theme.
## Original

## Example

## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('Antiraedus/LeDude-dog')
image = pipeline('a photo of LeDude dog in the Acropolis').images[0]  # a prompt is required
image
```
|
Calamarii/calamari | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-12-31T03:43:09Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: fr
datasets:
- lmqg/qag_frquad
pipeline_tag: text2text-generation
tags:
- questions and answers generation
widget:
- text: "Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc."
example_title: "Questions & Answers Generation Example 1"
model-index:
- name: lmqg/mt5-base-frquad-qag
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qag_frquad
type: default
args: default
metrics:
- name: QAAlignedF1Score-BERTScore (Question & Answer Generation)
type: qa_aligned_f1_score_bertscore_question_answer_generation
value: 78.28
- name: QAAlignedRecall-BERTScore (Question & Answer Generation)
type: qa_aligned_recall_bertscore_question_answer_generation
value: 78.21
- name: QAAlignedPrecision-BERTScore (Question & Answer Generation)
type: qa_aligned_precision_bertscore_question_answer_generation
value: 78.36
- name: QAAlignedF1Score-MoverScore (Question & Answer Generation)
type: qa_aligned_f1_score_moverscore_question_answer_generation
value: 51.66
- name: QAAlignedRecall-MoverScore (Question & Answer Generation)
type: qa_aligned_recall_moverscore_question_answer_generation
value: 51.59
- name: QAAlignedPrecision-MoverScore (Question & Answer Generation)
type: qa_aligned_precision_moverscore_question_answer_generation
value: 51.73
---
# Model Card of `lmqg/mt5-base-frquad-qag`
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for the question & answer pair generation task on the [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** fr
- **Training data:** [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="fr", model="lmqg/mt5-base-frquad-qag")
# model prediction
question_answer_pairs = model.generate_qa("Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-base-frquad-qag")
output = pipe("Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.")
```
## Evaluation
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-frquad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_frquad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-------------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 78.28 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
| QAAlignedF1Score (MoverScore) | 51.66 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
| QAAlignedPrecision (BERTScore) | 78.36 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
| QAAlignedPrecision (MoverScore) | 51.73 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
| QAAlignedRecall (BERTScore) | 78.21 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
| QAAlignedRecall (MoverScore) | 51.59 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_frquad
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: None
- model: google/mt5-base
- max_length: 512
- max_length_output: 256
- epoch: 11
- batch: 8
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.0
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-frquad-qag/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
CallumRai/HansardGPT2 | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | 2022-12-31T03:45:38Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Jackmin108/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
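From there, a greedy rollout with the loaded Q-table might look like this (a sketch assuming the pickled dict stores the table under the `"qtable"` key, as in the course notebook, and the classic 4-tuple `gym` step API):
```python
import numpy as np

state = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, done, info = env.step(action)
```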
|
Cameron/BERT-Jigsaw | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 35 | 2022-12-31T03:49:01Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: de
datasets:
- lmqg/qag_dequad
pipeline_tag: text2text-generation
tags:
- questions and answers generation
widget:
- text: "Empfangs- und Sendeantenne sollen in ihrer Polarisation übereinstimmen, andernfalls wird die Signalübertragung stark gedämpft. "
example_title: "Questions & Answers Generation Example 1"
model-index:
- name: lmqg/mt5-base-dequad-qag
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qag_dequad
type: default
args: default
metrics:
- name: QAAlignedF1Score-BERTScore (Question & Answer Generation)
type: qa_aligned_f1_score_bertscore_question_answer_generation
value: 0.1
- name: QAAlignedRecall-BERTScore (Question & Answer Generation)
type: qa_aligned_recall_bertscore_question_answer_generation
value: 0.1
- name: QAAlignedPrecision-BERTScore (Question & Answer Generation)
type: qa_aligned_precision_bertscore_question_answer_generation
value: 0.1
- name: QAAlignedF1Score-MoverScore (Question & Answer Generation)
type: qa_aligned_f1_score_moverscore_question_answer_generation
value: 0.1
- name: QAAlignedRecall-MoverScore (Question & Answer Generation)
type: qa_aligned_recall_moverscore_question_answer_generation
value: 0.1
- name: QAAlignedPrecision-MoverScore (Question & Answer Generation)
type: qa_aligned_precision_moverscore_question_answer_generation
value: 0.1
---
# Model Card of `lmqg/mt5-base-dequad-qag`
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for the question & answer pair generation task on the [lmqg/qag_dequad](https://huggingface.co/datasets/lmqg/qag_dequad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** de
- **Training data:** [lmqg/qag_dequad](https://huggingface.co/datasets/lmqg/qag_dequad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="de", model="lmqg/mt5-base-dequad-qag")
# model prediction
question_answer_pairs = model.generate_qa("das erste weltweit errichtete Hermann Brehmer 1855 im niederschlesischen ''Görbersdorf'' (heute Sokołowsko, Polen).")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-base-dequad-qag")
output = pipe("Empfangs- und Sendeantenne sollen in ihrer Polarisation übereinstimmen, andernfalls wird die Signalübertragung stark gedämpft. ")
```
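The pipeline returns the standard `transformers` list-of-dicts output, so the raw generation can be read back like this:
```python
print(output[0]["generated_text"])
```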
## Evaluation
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-dequad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_dequad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-------------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 0.1 | default | [lmqg/qag_dequad](https://huggingface.co/datasets/lmqg/qag_dequad) |
| QAAlignedF1Score (MoverScore) | 0.1 | default | [lmqg/qag_dequad](https://huggingface.co/datasets/lmqg/qag_dequad) |
| QAAlignedPrecision (BERTScore) | 0.1 | default | [lmqg/qag_dequad](https://huggingface.co/datasets/lmqg/qag_dequad) |
| QAAlignedPrecision (MoverScore) | 0.1 | default | [lmqg/qag_dequad](https://huggingface.co/datasets/lmqg/qag_dequad) |
| QAAlignedRecall (BERTScore) | 0.1 | default | [lmqg/qag_dequad](https://huggingface.co/datasets/lmqg/qag_dequad) |
| QAAlignedRecall (MoverScore) | 0.1 | default | [lmqg/qag_dequad](https://huggingface.co/datasets/lmqg/qag_dequad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_dequad
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: None
- model: google/mt5-base
- max_length: 512
- max_length_output: 256
- epoch: 11
- batch: 2
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 32
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-dequad-qag/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
Cameron/BERT-SBIC-offensive | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | 2022-12-31T03:54:13Z | ---
tags:
- espnet
- audio
- self-supervised-learning
language: en
datasets:
- librispeech
license: cc-by-4.0
---
## ESPnet2 SSL model
### `simpleoier/simpleoier_librispeech_hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw`
This model was trained by simpleoier using the librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 753f40d61813436d4e76660904d02eaed7a6649e
pip install -e .
cd egs2/librispeech/ssl1
./run.sh --skip_data_prep false --skip_train true --download_model simpleoier/simpleoier_librispeech_hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw
```
## SSL config
<details><summary>expand</summary>
```
config: conf/tuning/train_ssl_torchaudiohubert_base_960h_pretrain_it0.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw
ngpu: 1
seed: 0
num_workers: 64
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45091
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 250
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 48000000
valid_batch_bins: null
train_shape_file:
- exp/hubert_iter0_stats_raw/train/speech_shape
- exp/hubert_iter0_stats_raw/train/text_shape.word
valid_shape_file:
- exp/hubert_iter0_stats_raw/valid/speech_shape
- exp/hubert_iter0_stats_raw/valid/text_shape.word
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 400
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_960/wav.scp
- speech
- sound
- - dump/raw/train_960/text.km.kmeans_iter0_mfcc_train_960_portion0.1
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text.km.kmeans_iter0_mfcc_train_960_portion0.1
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 32000
token_list:
- '81'
- '5'
- '79'
- '84'
- '27'
- '35'
- '67'
- '56'
- '10'
- '99'
- '24'
- '3'
- '48'
- '8'
- '42'
- '16'
- '32'
- '31'
- '47'
- '43'
- '20'
- '73'
- '49'
- '86'
- '18'
- '64'
- '34'
- '59'
- '95'
- '0'
- '52'
- '44'
- '61'
- '57'
- '30'
- '1'
- '93'
- '6'
- '69'
- '19'
- '7'
- '65'
- '28'
- '89'
- '2'
- '96'
- '91'
- '72'
- '38'
- '78'
- '26'
- '13'
- '39'
- '94'
- '4'
- '88'
- '85'
- '51'
- '82'
- '41'
- '50'
- '21'
- '80'
- '97'
- '87'
- '25'
- '54'
- '12'
- '40'
- '60'
- '29'
- '11'
- '53'
- '71'
- '83'
- '74'
- '68'
- '55'
- '62'
- '76'
- '45'
- '75'
- '92'
- '46'
- '36'
- '66'
- '22'
- '77'
- '23'
- '63'
- '37'
- '58'
- '33'
- '15'
- '17'
- '90'
- '98'
- '14'
- '70'
- '9'
- <unk>
- <sos/eos>
init: null
collate_fn_conf:
label_downsampling: 2
pad: false
rand_crop: true
input_size: 1
num_classes: 100
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
pred_masked_weight: 1.0
pred_nomask_weight: 0.0
loss_weights: 0.0
frontend: null
frontend_conf: {}
specaug: null
specaug_conf: {}
normalize: null
normalize_conf: {}
preencoder: null
preencoder_conf: {}
encoder: torchaudio_hubert
encoder_conf:
encoder_projection_dropout: 0.1
encoder_attention_dropout: 0.1
encoder_ff_interm_dropout: 0.0
encoder_dropout: 0.1
encoder_layer_drop: 0.05
model: torchaudio
model_conf: {}
required:
- output_dir
- token_list
version: '202209'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-ner | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 81 | null | ---
tags:
- generated_from_trainer
model-index:
- name: REPEAT622_2wangchanberta-base-att-spm-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# REPEAT622_2wangchanberta-base-att-spm-uncased
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
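For reference, this list maps onto `TrainingArguments` roughly as follows (a sketch; `output_dir` is a placeholder, and Native AMP corresponds to `fp16=True`):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="REPEAT622_2wangchanberta-base-att-spm-uncased",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,  # Native AMP
)
```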
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3712 | 1.0 | 9531 | 0.7093 |
| 0.1678 | 2.0 | 19062 | 1.0832 |
| 0.1362 | 3.0 | 28593 | 1.0609 |
| 0.1211 | 4.0 | 38124 | 1.3014 |
| 0.1076 | 5.0 | 47655 | 1.4995 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.13.0+cu116
- Datasets 1.17.0
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-xnli | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- ga
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- si
- sk
- sl
- so
- sq
- sr
- sv
- sw
- ta
- te
- th
- tl
- tr
- uk
- ur
- uz
- vi
- zh
license: mit
---
# xmod-base
X-MOD is a multilingual masked language model trained on filtered CommonCrawl data containing 81 languages. It was introduced in the paper [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) (Pfeiffer et al., NAACL 2022) and first released in [this repository](https://github.com/facebookresearch/fairseq/tree/main/examples/xmod).
Because it has been pre-trained with language-specific modular components (_language adapters_), X-MOD differs from previous multilingual models like [XLM-R](https://huggingface.co/xlm-roberta-base). For fine-tuning, the language adapters in each transformer layer are frozen.
# Usage
## Tokenizer
This model reuses the tokenizer of [XLM-R](https://huggingface.co/xlm-roberta-base), so you can load the tokenizer as follows:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
```
## Input Language
Because this model uses language adapters, you need to specify the language of your input so that the correct adapter can be activated:
```python
from transformers import XmodModel
model = XmodModel.from_pretrained("jvamvas/xmod-base")
model.set_default_language("en_XX")
```
A directory of the language adapters in this model is found at the bottom of this model card.
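Putting the tokenizer and model together, a minimal forward pass looks like this (the example sentence and shape check are purely illustrative):
```python
import torch
from transformers import AutoTokenizer, XmodModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = XmodModel.from_pretrained("jvamvas/xmod-base")
model.set_default_language("en_XX")

inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```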
## Fine-tuning
In the experiments in the original paper, the embedding layer and the language adapters are frozen during fine-tuning. A method for doing this is provided in the code:
```python
model.freeze_embeddings_and_language_adapters()
# Fine-tune the model ...
```
## Cross-lingual Transfer
After fine-tuning, zero-shot cross-lingual transfer can be tested by activating the language adapter of the target language:
```python
model.set_default_language("de_DE")
# Evaluate the model on German examples ...
```
# Bias, Risks, and Limitations
Please refer to the model card of [XLM-R](https://huggingface.co/xlm-roberta-base), because X-MOD has a similar architecture and has been trained on similar training data.
# Citation
**BibTeX:**
```bibtex
@inproceedings{pfeiffer-etal-2022-lifting,
title = "Lifting the Curse of Multilinguality by Pre-training Modular Transformers",
author = "Pfeiffer, Jonas and
Goyal, Naman and
Lin, Xi and
Li, Xian and
Cross, James and
Riedel, Sebastian and
Artetxe, Mikel",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.255",
doi = "10.18653/v1/2022.naacl-main.255",
pages = "3479--3495"
}
```
# Languages
This model contains the following language adapters:
| lang_id (Adapter index) | Language code | Language |
|-------------------------|---------------|-----------------------|
| 0 | en_XX | English |
| 1 | id_ID | Indonesian |
| 2 | vi_VN | Vietnamese |
| 3 | ru_RU | Russian |
| 4 | fa_IR | Persian |
| 5 | sv_SE | Swedish |
| 6 | ja_XX | Japanese |
| 7 | fr_XX | French |
| 8 | de_DE | German |
| 9 | ro_RO | Romanian |
| 10 | ko_KR | Korean |
| 11 | hu_HU | Hungarian |
| 12 | es_XX | Spanish |
| 13 | fi_FI | Finnish |
| 14 | uk_UA | Ukrainian |
| 15 | da_DK | Danish |
| 16 | pt_XX | Portuguese |
| 17 | no_XX | Norwegian |
| 18 | th_TH | Thai |
| 19 | pl_PL | Polish |
| 20 | bg_BG | Bulgarian |
| 21 | nl_XX | Dutch |
| 22 | zh_CN | Chinese (simplified) |
| 23 | he_IL | Hebrew |
| 24 | el_GR | Greek |
| 25 | it_IT | Italian |
| 26 | sk_SK | Slovak |
| 27 | hr_HR | Croatian |
| 28 | tr_TR | Turkish |
| 29 | ar_AR | Arabic |
| 30 | cs_CZ | Czech |
| 31 | lt_LT | Lithuanian |
| 32 | hi_IN | Hindi |
| 33 | zh_TW | Chinese (traditional) |
| 34 | ca_ES | Catalan |
| 35 | ms_MY | Malay |
| 36 | sl_SI | Slovenian |
| 37 | lv_LV | Latvian |
| 38 | ta_IN | Tamil |
| 39 | bn_IN | Bengali |
| 40 | et_EE | Estonian |
| 41 | az_AZ | Azerbaijani |
| 42 | sq_AL | Albanian |
| 43 | sr_RS | Serbian |
| 44 | kk_KZ | Kazakh |
| 45 | ka_GE | Georgian |
| 46 | tl_XX | Tagalog |
| 47 | ur_PK | Urdu |
| 48 | is_IS | Icelandic |
| 49 | hy_AM | Armenian |
| 50 | ml_IN | Malayalam |
| 51 | mk_MK | Macedonian |
| 52 | be_BY | Belarusian |
| 53 | la_VA | Latin |
| 54 | te_IN | Telugu |
| 55 | eu_ES | Basque |
| 56 | gl_ES | Galician |
| 57 | mn_MN | Mongolian |
| 58 | kn_IN | Kannada |
| 59 | ne_NP | Nepali |
| 60 | sw_KE | Swahili |
| 61 | si_LK | Sinhala |
| 62 | mr_IN | Marathi |
| 63 | af_ZA | Afrikaans |
| 64 | gu_IN | Gujarati |
| 65 | cy_GB | Welsh |
| 66 | eo_EO | Esperanto |
| 67 | km_KH | Central Khmer |
| 68 | ky_KG | Kirghiz |
| 69 | uz_UZ | Uzbek |
| 70 | ps_AF | Pashto |
| 71 | pa_IN | Punjabi |
| 72 | ga_IE | Irish |
| 73 | ha_NG | Hausa |
| 74 | am_ET | Amharic |
| 75 | lo_LA | Lao |
| 76 | ku_TR | Kurdish |
| 77 | so_SO | Somali |
| 78 | my_MM | Burmese |
| 79 | or_IN | Oriya |
| 80 | sa_IN | Sanskrit |
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-mldoc | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 39 | null | ---
language:
- multilingual
- ar
- cs
- en
- eu
- fi
- fr
- hi
- hr
- hu
- hy
- id
- it
- ka
- ko
- lt
- ml
- mn
- ms
- pl
- ro
- ru
- si
- sk
- sq
- sv
- sw
- ta
- th
- tl
- vi
license: mit
---
An X-MOD model of size *base* trained on 30 languages for 125k update steps.
See https://huggingface.co/jvamvas/xmod-base for details. |
dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: openrail++
language:
- en
- ja
- es
- zh
widget:
- text: environment art
example_title: Concept Art 1
- text: environment concept art
example_title: Concept Art 2
- text: environment,landscape, wallpaper
example_title: Concept Art 3
- text: a beautiful illustration of a fantasy forest
example_title: Fantasy Forest
tags:
- stable-diffusion
- sygil-diffusion
- text-to-image
- sygil-devs
- finetune
- stable-diffusion-1.5
inference: true
pinned: true
metrics:
- accuracy
- bertscore
- bleu
- bleurt
- brier_score
- cer
- character
- charcut_mt
- chrf
- code_eval
---
# About the model
-----------------
This model is a fine-tune of Stable Diffusion, trained on the [Imaginary Network Expanded Dataset](https://github.com/Sygil-Dev/INE-dataset), with the big advantage of allowing the use of multiple namespaces (labeled tags) to control various parts of the final generation.
While current models are usually prone to “context errors” and need substantial negative prompting to set them on the right track, the use of namespaces in this model (e.g. “species:seal” or “studio:dc”) stops the model from misinterpreting a seal as the singer Seal, or DC Comics as Washington DC.
This model is also able to understand other languages besides English; currently it can partially understand prompts in Chinese, Japanese and Spanish. More training is already being done to have the model fully understand those languages and work just as well as it does with English prompts.
As the model is fine-tuned on a wide variety of content, it’s able to generate many types of images and compositions, and easily outperforms the original model when it comes to portraits, architecture, reflections, fantasy, concept art, anime, landscapes and a lot more without being hyper-specialized like other community fine-tunes that are currently available.
**Note:** The prompt engineering techniques needed are slightly different from other fine-tunes and the original Stable Diffusion model, so while you can still use your favorite prompts, for best results you might need to tweak them to make use of namespaces. A more detailed guide will be available later on, but the tags and namespaces found in the [Dataset Explorer](https://huggingface.co/spaces/Sygil/INE-dataset-explorer) should be able to start you off on the right track.
If you find my work useful, please consider supporting me on [GitHub Sponsors](https://github.com/sponsors/ZeroCool940711)!
This model is still in its infancy and it's meant to be constantly updated and trained with more and more data as time goes by, so feel free to give us feedback on our [Discord Server](https://discord.gg/UjXFsf6mTu) or in the Discussions section on Hugging Face. We plan to improve it with more, better tags in the future, so any help is always welcome 😛
[](https://discord.gg/UjXFsf6mTu)
# Showcase

## Examples
Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Sygil Diffusion in a simple and efficient manner.
```bash
pip install diffusers transformers accelerate scipy safetensors
```
Running the pipeline (if you don't swap the scheduler it will run with the default DDIM, in this example we are swapping it to DPMSolverMultistepScheduler):
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
model_id = "Sygil/Sygil-Diffusion"
# Use the DPMSolverMultistepScheduler (DPM-Solver++) scheduler here instead
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "a beautiful illustration of a fantasy forest"
image = pipe(prompt).images[0]
image.save("fantasy_forest_illustration.png")
```
**Notes**:
- Despite not being a dependency, we highly recommend you to install [xformers](https://github.com/facebookresearch/xformers) for memory efficient attention (better performance)
- If you have low GPU RAM available, make sure to add a `pipe.enable_attention_slicing()` after sending it to `cuda` for less VRAM usage (to the cost of speed).
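In code, these two notes amount to a pair of pipeline toggles (method names as documented in the diffusers API):
```python
# Memory-efficient attention; requires xformers to be installed
pipe.enable_xformers_memory_efficient_attention()
# Alternative for low-VRAM GPUs: trade speed for memory
pipe.enable_attention_slicing()
```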
## Available Checkpoints:
- #### Stable:
- [Sygil Diffusion v0.1](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.1.ckpt): Trained on Stable Diffusion 1.5 for 800,000 steps.
- [Sygil Diffusion v0.2](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.2.ckpt): Resumed from Sygil Diffusion v0.1 and trained for a total of 1.77 million steps.
- [Sygil Diffusion v0.3](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.3.ckpt): Resumed from Sygil Diffusion v0.2 and trained for a total of 2.01 million steps.
- [Sygil Diffusion v0.4](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.4.ckpt): Resumed from Sygil Diffusion v0.3 and trained for a total of 2.37 million steps.
- #### Beta:
- No active beta right now.
Note: Checkpoints under the Beta section are updated daily, or at least 3-4 times a week; each update is usually the equivalent of 1-2 training sessions.
This continues until they are stable enough to be moved into a proper release, usually every 1 or 2 weeks.
While the beta checkpoints can be used as they are, only the latest version is kept on the repo; older checkpoints are removed when a new one
is uploaded to keep the repo clean. The Hugging Face inference API, as well as the diffusers library, will always use the latest beta checkpoint in the diffusers format.
For special cases we might make additional repositories to keep a copy of the diffusers model, e.g. when a model uses a different Stable Diffusion model as base (Stable Diffusion 1.5 vs 2.1).
## Training
**Training Data**:
The model was trained on the following dataset:
- [Imaginary Network Expanded Dataset](https://github.com/Sygil-Dev/INE-dataset) dataset.
**Hardware and others**
- **Hardware:** 1 x Nvidia RTX 3050 8GB GPU
- **Hours Trained:** 857 hours approximately.
- **Optimizer:** AdamW
- **Adam Beta 1**: 0.9
- **Adam Beta 2**: 0.999
- **Adam Weight Decay**: 0.01
- **Adam Epsilon**: 1e-8
- **Gradient Checkpointing**: True
- **Gradient Accumulations**: 400
- **Batch:** 1
- **Learning Rate:** 1e-7
- **Learning Rate Scheduler:** cosine_with_restarts
- **Learning Rate Warmup Steps:** 10,000
- **Lora unet Learning Rate**: 1e-7
- **Lora Text Encoder Learning Rate**: 1e-7
- **Resolution**: 512 pixels
- **Total Training Steps:** 2,370,200
Note: For the learning rate I'm testing something new. After switching from the `constant` scheduler to `cosine_with_restarts` once v0.3 was released, I noticed
it practically uses the optimal learning rate while trying to minimize the loss value. So, when each training session finishes, I start the next session from the latest
learning rate value shown during the last few steps of the previous session; this makes the rate decrease over time at a steady pace. When I add a lot of data to the training dataset
at once, I move the learning rate back to 1e-7, and the scheduler then moves it down again as the model learns from the new data. This keeps the training
from overfitting, and avoids a learning rate so low that the model stops learning anything new for a while.
Developed by: [ZeroCool94](https://github.com/ZeroCool940711) at [Sygil-Dev](https://github.com/Sygil-Dev/)
## Community Contributions:
- [Kevin Turner (keturn)](https://huggingface.co/keturn): created the [INE-dataset-explorer](https://huggingface.co/spaces/Sygil/INE-dataset-explorer) space for better browsing of the INE dataset.
*This model card is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
# License
This model is open access and available to all, with a CreativeML Open RAIL++-M License further specifying rights and usage. [Please read the full license here](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) |
Charlotte77/model_test | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-sequence
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-sequence
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9874
- eval_runtime: 18.3744
- eval_samples_per_second: 561.433
- eval_steps_per_second: 70.206
- epoch: 2.24
- step: 52015
## Model description
More information needed
## Intended uses & limitations
More information needed
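Pending more detail, loading the checkpoint for generation should follow the standard pattern (a sketch; `<user>` is a placeholder for this repo's actual namespace):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="<user>/distilgpt2-finetuned-sequence")  # placeholder id
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```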
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Chertilasus/main | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- wildcard
widget:
- text: a photo of ancma map of beautiful flower garden.
---
## Description
This is a Stable Diffusion model fine-tuned on 100 ancient/old maps for the DreamBooth Hackathon 🔥 wildcard theme. To participate or learn more, visit [this page](https://huggingface.co/dreambooth-hackathon).
To generate ancient/old maps, use **a photo of ancma map of [your choice]**. Modifiers and negative prompts may improve results. The model is not limited to classic geography; you can try gardens, cave systems, cities, planets, zodiac charts, etc.
## Examples
*a photo of ancma map of fiery volcano island.*

*a photo of ancma map of peaceful Swiss town near a lake.*

*a photo of ancma map of giant ant colony.*

*a photo of ancma map of beautiful flower garden.*

## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('baruga/ancient-maps')
# A text prompt is required; this reuses the widget example from this card
image = pipeline("a photo of ancma map of beautiful flower garden.").images[0]
image
```
|
ChristianOrr/madnet_keras | [
"tensorboard",
"dataset:flyingthings-3d",
"dataset:kitti",
"arxiv:1810.05424",
"vision",
"deep-stereo",
"depth-estimation",
"Tensorflow2",
"Keras",
"license:apache-2.0"
]
| depth-estimation | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-12-31T19:16:09Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="wpbsball12/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Chun/w-en2zh-hsk | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
tags:
- "peak finding"
- "synchrotron"
- "hedm"
license: "mit"
metrics:
- "mse"
---
BraggNN is a 2D peak-finding NN for identifying peak centers. |
Chun/w-en2zh-mtm | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-12-31T19:30:02Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_v3_1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Glen/taxi_v3_1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
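A greedy episode with the loaded Q-table can then be rolled out and scored (a sketch assuming the pickled dict stores the table under the `"qtable"` key, as in the course notebook, and the classic 4-tuple `gym` step API):
```python
import numpy as np

state = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward}")
```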
|
Cloudy/DialoGPT-CJ-large | [
"pytorch",
"conversational"
]
| conversational | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2022-12-31T23:18:37Z | # QCPG++
```
Dataset: ParaBank2.0
Learning Rate: 1e-4
```
## Text Diversity Metrics
```
Semantic Similarity: DocumentSemanticDiversity
Syntactic Diversity: DependencyDiversity
Lexical Diversity: Character-level edit distance
Phonological Diversity: RhythmicDiversity
Morphological Diversity: POSSequenceDiversity
```
## Results
```
Training Loss: 0.4351
Dev Loss: 1.0986
Dev BLEU: 34.3115
```
|
CoderBoy432/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2023-01-01T00:19:09Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/blood-cell-object-detection
model-index:
- name: keremberke/yolov5s-blood-cell
results:
- task:
type: object-detection
dataset:
type: keremberke/blood-cell-object-detection
name: keremberke/blood-cell-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.9022929540677422 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5s-blood-cell" src="https://huggingface.co/keremberke/yolov5s-blood-cell/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5s-blood-cell')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5s-blood-cell --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)** |
CogComp/roberta-temporal-predictor | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.00436",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### yyaaeell Dreambooth model trained by Brainergy with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
CohleM/bert-nepali-tokenizer | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Breakout-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Breakout-v5
type: Breakout-v5
metrics:
- type: mean_reward
value: 539.90 +/- 185.35
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Breakout-v5**
This is a trained model of a PPO agent playing Breakout-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_atari_envpool_xla_jax_scan.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_atari_envpool_xla_jax_scan]"
python -m cleanrl_utils.enjoy --exp-name ppo_atari_envpool_xla_jax_scan --env-id Breakout-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Breakout-v5-ppo_atari_envpool_xla_jax_scan-seed1/raw/main/ppo_atari_envpool_xla_jax_scan.py
curl -OL https://huggingface.co/cleanrl/Breakout-v5-ppo_atari_envpool_xla_jax_scan-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Breakout-v5-ppo_atari_envpool_xla_jax_scan-seed1/raw/main/poetry.lock
poetry install --all-extras
python ppo_atari_envpool_xla_jax_scan.py --track --save-model --upload-model --hf-entity cleanrl --env-id Breakout-v5 --seed 1
```
# Hyperparameters
```python
{'anneal_lr': True,
'batch_size': 1024,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Breakout-v5',
'exp_name': 'ppo_atari_envpool_xla_jax_scan',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 256,
'norm_adv': True,
'num_envs': 8,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 9765,
'save_model': True,
'seed': 1,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Cool/Demo | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
- diffusers
- Pop Art
- Roy Lichenstein
- 1970s
inference: true
---
## Roy PopArt Diffusion
This is an SD 1.5 model trained on pop art made by the one and only Roy Lichtenstein (and some other pop art).
The model can only really do portraits, and even then, to get decent-looking results you do have to tinker with the prompt and/or create more samples.
Occasionally, it still makes realistic-looking people with a pop art background. For the time being, the best thing you can do is simply adjust the prompt and/or rerun it.
There are definitely a lot of things I could do to improve the model, but I haven't got around to doing that yet.
Please check out the important information on the usage of the model down below.
To reference the art style, use the token: roypop style
There is already an existing model that uses textual inversion; this one is trained using Dreambooth instead. Whether or not this method is better, I will let you judge.
### Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Roy_PopArt_Diffusion:
[](https://huggingface.co/spaces/ItsJayQz/Roy_PopArt_Diffusion)
**Portraits**






Prompt used:
portrait of *name* in roypop style, digital art, trending on artstation, highly detailed, fine detail, intricate
negative: cartoon, 3d, ((disfigured)), ((bad art)), ((deformed)), ((poorly drawn)), ((extra limbs)), blurry
Guidance: 8
Steps: 50 using DDIM
I'm not a prompt wizard so you can probably get better results with some tuning.
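Since this is a standard SD 1.5 fine-tune, it should also load with diffusers (a sketch; the repo id is assumed to match the Gradio space above):
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("ItsJayQz/Roy_PopArt_Diffusion").to("cuda")  # assumed repo id
image = pipe("portrait of a woman in roypop style, digital art, highly detailed").images[0]
image.save("roypop_portrait.png")
```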
**Disclaimers**
- This was created entirely for research and entertainment purposes.
- I do not plan, nor am I planning, on turning this model into a commercial product or using it for commercial purposes.
- I do not condone the usage of the model for making counterfeit products.
**License**
- This model is under Creative OpenRAIL-M.
- This means the model can be used royalty-free, with flexible usage terms such as redistribution of the model or of any derivatives of it.
- However, there are restrictions on the openness of the license.
More info into the restrictions can be found [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license).
**Responsibilities**
- By using/downloading the model, you are responsible for:
- All outputs/usage of the model.
- Understanding the Disclaimers.
- Upholding the terms of the license.
Thanks for checking out the model! |
CouchCat/ma_ner_v6_distil | [
"pytorch",
"distilbert",
"token-classification",
"en",
"transformers",
"ner",
"license:mit",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.89 +/- 18.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders, not this card's actual artifacts):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholders: substitute the repo id and zip filename of the uploaded agent.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Craftified/Bob | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: opt-350m_mle_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m_mle_v2
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
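Since the base model is a causal language model, here is a minimal generation sketch (the repo id is a placeholder; substitute wherever this checkpoint is hosted):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "<user>/opt-350m_mle_v2"  # placeholder repo id for this checkpoint
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```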
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
CrayonShinchan/fine_tune_try_1 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc-by-nc-sa-4.0
language:
- en
thumbnail: "https://huggingface.co/GeneralAwareness/Bettermaker/resolve/main/tmped0j_g4y.png"
tags:
- stable-diffusion
- v2
- text-to-image
- image-to-image
- Embedding
---
Textual Inversion embedding by General Awareness for SD 2.x, trained on 768x768 images from various sources.
Install it by downloading the .pt embedding and putting it in the \embeddings folder.
This embedding was made primarily to be used as a negative prompt (alongside your other negatives), as it changes, or refines, the final image; in all the testing that I and others did, the final images were simply better.
---
Usage: in the negative prompt, just add Bettermaker alongside the rest that you currently use. If you use it in your positive prompt instead, call it as such: image in Bettermaker style, Bettermaker style, Bettermaker, in the style of Bettermaker, or by Bettermaker.
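For `diffusers` users, a minimal sketch of loading the embedding and using it as a negative prompt (the local file name, token name, and base model choice are assumptions; match them to your setup):
```python
import torch
from diffusers import StableDiffusionPipeline

# SD 2.x base model; the embedding file name and token below are assumptions.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("./Bettermaker.pt", token="Bettermaker")

image = pipe(
    "scary clown, highly detailed, sharp focus",
    negative_prompt="Bettermaker",
    num_inference_steps=50,
).images[0]
image.save("out.png")
```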
---
scary clown image in vint-3000 style, highly detailed, 8 k, hdr, smooth, sharp focus, high resolution, award - winning photo, (selective color red eyes)

Using the embedding in the negative prompt of the above

Adding the embedding (Bettermaker) at the end. Using the other calling forms results in differences ranging from wild to unusable.

Not using this embedding.

Using this embedding in the negative prompt.

Adding my nirphoto-3000 with this embedding in the negative as well.
 |
Crispy/dialopt-small-kratos | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-01T05:41:06Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: markafitzgerald1/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Crystal/distilbert-base-uncased-finetuned-squad | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- music
--- |
Culmenus/IceBERT-finetuned-ner | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2023-01-01T06:00:21Z | ---
language:
- ta
license: apache-2.0
tags:
- whisper-event
metrics:
- wer
model-index:
- name: Whisper Tamil Small - Vasista Sai Lodagala
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: ta_in
split: test
metrics:
- type: wer
value: 9.11
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: ta
split: test
metrics:
- type: wer
value: 7.95
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tamil Small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Tamil data available from multiple publicly available ASR corpora.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
**NOTE:** The code used to train this model is available for re-use in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository.
## Usage
In order to evaluate this model on an entire dataset, the evaluation codes available in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository can be used.
The same repository also provides the scripts for faster inference using whisper-jax.
In order to infer a single audio file using this model, the following code snippet can be used:
```python
>>> import torch
>>> from transformers import pipeline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-tamil-small", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
For faster inference of whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the necessary installation steps as mentioned [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax), before using the following code snippet:
```python
>>> import jax.numpy as jnp
>>> from whisper_jax import FlaxWhisperForConditionalGeneration, FlaxWhisperPipline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-tamil-small", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
## Training and evaluation data
Training Data:
- [IISc-MILE Tamil ASR Corpus](https://www.openslr.org/127/)
- [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus#tamil-labelled--total-duration-is-116024-hours)
- [Shrutilipi ASR Corpus](https://ai4bharat.org/shrutilipi)
- [Microsoft Speech Corpus (Indian Languages)](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Train+Dev set](https://huggingface.co/datasets/google/fleurs)
- Babel ASR Corpus
Evaluation Data:
- [Microsoft Speech Corpus (Indian Languages) Test Set](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Test Set](https://huggingface.co/datasets/google/fleurs)
- [IISc-MILE Test Set](https://www.openslr.org/127/)
- Babel Test Set
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.7e-05
- train_batch_size: 48
- eval_batch_size: 32
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 17500
- training_steps: 29659 (Initially set to 84740 steps)
- mixed_precision_training: True
## Acknowledgement
This work was done at [Speech Lab, IIT Madras](https://asr.iitm.ac.in/).
The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India. |
Culmenus/checkpoint-168500-finetuned-de-to-is_nr2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-01T06:01:39Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/license-plate-object-detection
model-index:
- name: keremberke/yolov5m-license-plate
results:
- task:
type: object-detection
dataset:
type: keremberke/license-plate-object-detection
name: keremberke/license-plate-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.9882982754936463 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5m-license-plate" src="https://huggingface.co/keremberke/yolov5m-license-plate/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5m-license-plate')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5m-license-plate --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)** |
Culmenus/opus-mt-de-is-finetuned-de-to-is_ancc | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-01T06:24:43Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 734.00 +/- 185.15
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DanGalt -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DanGalt -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga DanGalt
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 120000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.00015),
('learning_starts', 100000),
('n_timesteps', 1500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_nr2 | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 731.00 +/- 281.95
name: mean_reward
verified: false
---
# **QRDQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **QRDQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga akgeni -f logs/
python enjoy.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga akgeni -f logs/
rl_zoo3 enjoy --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga akgeni
```
## Hyperparameters
```python
OrderedDict([('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_fraction', 0.025),
('frame_stack', 4),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('replay_buffer_kwargs', 'dict(handle_timeout_termination=False)'),
('normalize', False)])
```
|
CuongLD/wav2vec2-large-xlsr-vietnamese | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"vi",
"dataset:common_voice, infore_25h",
"arxiv:2006.11477",
"arxiv:2006.13979",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-01-01T06:42:12Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('konishino/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
D4RL1NG/yes | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-01T08:51:11Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook (assumed available here).
model = load_from_hub(repo_id="SU-ZJU/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DJSammy/bert-base-swedish-uncased_BotXO-ai | [
"pytorch",
"transformers"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2023-01-01T09:32:39Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: saharM/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DJStomp/TestingSalvoNET | [
"transformers"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
tags:
- autotrain
- translation
language:
- en
- es
datasets:
- Tritkoman/autotrain-data-jqqjjqjo9jqjqj
co2_eq_emissions:
emissions: 60.07025137574803
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 2684380215
- CO2 Emissions (in grams): 60.0703
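## Usage
A minimal inference sketch (the repo id below is an assumption pieced together from the AutoTrain project and model ids above):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "Tritkoman/autotrain-jqqjjqjo9jqjqj-2684380215"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```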
## Validation Metrics
- Loss: 2.663
- SacreBLEU: 7.486
- Gen len: 12.545 |
DKpro000/DialoGPT-small-harrypotter | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/forklift-object-detection
model-index:
- name: keremberke/yolov5n-forklift
results:
- task:
type: object-detection
dataset:
type: keremberke/forklift-object-detection
name: keremberke/forklift-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.7890013934578441 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5n-forklift" src="https://huggingface.co/keremberke/yolov5n-forklift/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5n-forklift')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5n-forklift --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)** |
DSI/TweetBasedSA | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: doomfusion
---
### MF Doomfusion 1.5 Dreambooth model trained by koankoan with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training), using the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), or locally as sketched below. Don't forget to use the concept prompts!
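A minimal local inference sketch (the repo id is a placeholder; point it at this model's Hub repo):
```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id is an assumption; replace it with this DreamBooth model's Hub id.
pipe = StableDiffusionPipeline.from_pretrained(
    "<user>/mf-doomfusion-1-5", torch_dtype=torch.float16
).to("cuda")

# "doomfusion" is the concept token named in this card.
image = pipe("portrait of doomfusion, detailed, moody lighting").images[0]
image.save("doomfusion.png")
```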
Sample pictures of:
doomfusion (use that on your prompt)

|
DTAI-KULeuven/mbert-corona-tweets-belgium-topics | [
"pytorch",
"jax",
"bert",
"text-classification",
"multilingual",
"nl",
"fr",
"en",
"arxiv:2104.09947",
"transformers",
"Dutch",
"French",
"English",
"Tweets",
"Topic classification"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 167 | 2023-01-01T11:08:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
DTAI-KULeuven/robbertje-1-gb-merged | [
"pytorch",
"roberta",
"fill-mask",
"nl",
"dataset:oscar",
"dataset:oscar (NL)",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"license:mit",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2023-01-01T11:18:11Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/forklift-object-detection
model-index:
- name: keremberke/yolov5s-forklift
results:
- task:
type: object-detection
dataset:
type: keremberke/forklift-object-detection
name: keremberke/forklift-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.8382598267226307 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5s-forklift" src="https://huggingface.co/keremberke/yolov5s-forklift/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5s-forklift')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5s-forklift --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
|
alexandrainst/da-binary-emotion-classification-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,066 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: Pranavsk/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Daivakai/DialoGPT-small-saitama | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | Gazer model mixed using AnythingV3 and gazer only tagged images from danbooru
Dataset of 236 images with complete tag lists
alot of people seem to like gazers to I thought why not make this as gazer is quite specific the results get similar
More monster girl models on the way feel free to request your favs :)
Gazer: gazer, 1girl, black_hair, breasts, colored_sclera, colored_skin, cyclops, extra_eyes, full_body, grin, long_hair, looking_at_viewer, monster_girl, one-eyed, red_eyes, sharp_teeth, small_breasts, smile, teeth, tentacles yellow_sclera
 |
Daltcamalea01/Camaleaodalt | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-01T12:22:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.03 +/- 14.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the repo id and checkpoint filename are placeholders to adjust for this checkpoint:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Repo id and filename below are assumptions for this model's checkpoint.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Danbi/distilgpt2-finetuned-wikitext2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-01T12:29:49Z | ---
tags:
- conversational
---
# Kirby DialoGPT Model |
Darren/darren | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-01T13:28:29Z | ---
license: apache-2.0
---
# NB-ROBERTA Training Code
This is the current training code for the planned nb-roberta models.
We are currently planning to run the following experiments:
<table>
<tr>
<td><strong>Name</strong>
</td>
<td><strong>nb-roberta-base-old (C)</strong>
</td>
</tr>
<tr>
<td>Corpus
</td>
<td>NbAiLab/nb_bert
</td>
</tr>
<tr>
<td>Pod size
</td>
<td>v4-64
</td>
</tr>
<tr>
<td>Batch size
</td>
<td>62*4*8 = 1984 = 2k
</td>
</tr>
<tr>
<td>Learning rate
</td>
<td>3e-4 (RoBERTa article is using 6e-4 and bs=8k)
</td>
</tr>
<tr>
<td>Number of steps
</td>
<td>250k
</td>
</tr>
</table>
<table>
<tr>
<td><strong>Name</strong>
</td>
<td><strong>nb-roberta-base-ext (B)</strong>
</td>
</tr>
<tr>
<td>Corpus
</td>
<td>NbAiLab/nbailab_extended
</td>
</tr>
<tr>
<td>Pod size
</td>
<td>v4-64
</td>
</tr>
<tr>
<td>Batch size
</td>
<td>62*4*8 = 1984 = 2k
</td>
</tr>
<tr>
<td>Learning rate
</td>
<td>3e-4 (RoBERTa article is using 6e-4 and bs=8k)
</td>
</tr>
<tr>
<td>Number of steps
</td>
<td>250k
</td>
</tr>
</table>
<table>
<tr>
<td><strong>Name</strong>
</td>
<td><strong>nb-roberta-large-ext</strong>
</td>
</tr>
<tr>
<td>Corpus
</td>
<td>NbAiLab/nbailab_extended
</td>
</tr>
<tr>
<td>Pod size
</td>
<td>v4-64
</td>
</tr>
<tr>
<td>Batch size
</td>
<td>32*4*8 = 1024 = 1k
</td>
</tr>
<tr>
<td>Learning rate
</td>
<td>2e-4 (RoBERTa article is using 4e-4 and bs=8k)
</td>
</tr>
<tr>
<td>Number of steps
</td>
<td>500k
</td>
</tr>
</table>
<table>
<tr>
<td><strong>Name</strong>
</td>
<td><strong>nb-roberta-base-scandi</strong>
</td>
</tr>
<tr>
<td>Corpus
</td>
<td>NbAiLab/scandinavian
</td>
</tr>
<tr>
<td>Pod size
</td>
<td>v4-64
</td>
</tr>
<tr>
<td>Batch size
</td>
<td>62*4*8 = 1984 = 2k
</td>
</tr>
<tr>
<td>Learning rate
</td>
<td>3e-4 (RoBERTa article is using 6e-4 and bs=8k)
</td>
</tr>
<tr>
<td>Number of steps
</td>
<td>250k
</td>
</tr>
</table>
<table>
<tr>
<td><strong>Name</strong>
</td>
<td><strong>nb-roberta-large-scandi</strong>
</td>
</tr>
<tr>
<td>Corpus
</td>
<td>NbAiLab/scandinavian
</td>
</tr>
<tr>
<td>Pod size
</td>
<td>v4-64
</td>
</tr>
<tr>
<td>Batch size
</td>
<td>32*4*8 = 1024 = 1k
</td>
</tr>
<tr>
<td>Learning rate
</td>
<td>2e-4 (RoBERTa article is using 4e-4 and bs=8k)
</td>
</tr>
<tr>
<td>Number of steps
</td>
<td>500k
</td>
</tr>
</table>
## Calculations
Some basic calculations that we used when estimating the number of training steps (see the sketch after this list):
* The Scandinavian Corpus is 85GB
* The Scandinavian Corpus contains 13B words
* With a conversion factor of 2.3, this is estimated at around 30B tokens
* 30B tokens / (512 seq length * 3000 batch size) ≈ 20,000 steps
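As a quick check, the same arithmetic in Python (the 3000 batch size is the figure assumed by this estimate, not one of the configured batch sizes above):
```python
# Back-of-the-envelope training-step estimate from the list above.
words = 13e9            # words in the Scandinavian corpus
tokens = words * 2.3    # ~30B tokens using the 2.3 conversion factor
seq_len = 512           # sequence length
batch_size = 3000       # effective batch size assumed by the estimate
steps = tokens / (seq_len * batch_size)
print(f"~{steps:,.0f} training steps")  # ~19,466, i.e. roughly 20k
```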
|
DataikuNLP/TinyBERT_General_4L_312D | [
"pytorch",
"jax",
"bert",
"arxiv:1909.10351",
"transformers"
]
| null | {
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 74 | 2023-01-01T13:37:10Z | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
inference: false
license: mit
---
# Its Calling (Mob Umamusume) on Waifu Diffusion v1.3.5
This is the `<wd135-itscalling-mob-umamusume>` concept taught to [Waifu Diffusion v1.3.5](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/models/wd-1-3-5_80000-fp32.ckpt) via Textual Inversion.
## Credits
The training images were selectively taken from [Pixiv](https://www.pixiv.net), [Twitter](https://twitter.com), and in-game screenshots of Uma Musume Pretty Derby.
A CSV file describing the original sources for most images is available in the [raw dataset archive file](./datasets/raw.7z).
## Input
Here is the new concept you will be able to use as an `object`:





## Output Examples
Some images that can possibly be generated using the new concept:
!["<wd135-itscalling-mob-umamusume>, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]" -s 64 -S 3505534900 -W 512 -H 768 -C 10 -A k_dpmpp_2](./examples/000013.63c4d22c.3505534900.png)
```json
{
"model": "stable diffusion",
"model_weights": "waifu-diffusion-1.3.5",
"model_hash": "b438efac4434af4e482d20cdfcea64067f8dfec438628261d2f2aa60ffc41452",
"app_id": "invoke-ai/InvokeAI",
"app_version": "2.2.4",
"image": {
"prompt": [
{
"prompt": "<wd135-itscalling-mob-umamusume>, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]",
"weight": 1
}
],
"steps": 64,
"cfg_scale": 10,
"threshold": 0,
"perlin": 0,
"height": 768,
"width": 512,
"seed": 3505534900,
"seamless": false,
"hires_fix": false,
"type": "txt2img",
"postprocessing": null,
"sampler": "k_dpmpp_2",
"variations": []
}
}
```
!["<wd135-itscalling-mob-umamusume> horse ears horse tail horse girl, running outdoors park, white t-shirts black shorts, morning sunlight, pov from side looking at viewer cowboy shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]" -s 64 -S 821696414 -W 512 -H 768 -C 10 -A k_dpmpp_2](./examples/000019.37833118.821696414.png)
```json
{
"model": "stable diffusion",
"model_weights": "waifu-diffusion-1.3.5",
"model_hash": "b438efac4434af4e482d20cdfcea64067f8dfec438628261d2f2aa60ffc41452",
"app_id": "invoke-ai/InvokeAI",
"app_version": "2.2.4",
"image": {
"prompt": [
{
"prompt": "<wd135-itscalling-mob-umamusume> horse ears horse tail horse girl, running outdoors park, white t-shirts black shorts, morning sunlight, pov from side looking at viewer cowboy shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]",
"weight": 1
}
],
"steps": 64,
"cfg_scale": 10,
"threshold": 0,
"perlin": 0,
"height": 768,
"width": 512,
"seed": 821696414,
"seamless": false,
"hires_fix": false,
"type": "txt2img",
"postprocessing": null,
"sampler": "k_dpmpp_2",
"variations": []
}
}
```
!["<wd135-itscalling-mob-umamusume> horse ears horse tail horse girl, running outdoors park, white t-shirts black shorts, morning sunlight, pov from side looking at viewer cowboy shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]" -s 64 -S 460073536 -W 512 -H 768 -C 10 -A k_dpmpp_2](./examples/000020.58cf5625.460073536.png)
```json
{
"model": "stable diffusion",
"model_weights": "waifu-diffusion-1.3.5",
"model_hash": "b438efac4434af4e482d20cdfcea64067f8dfec438628261d2f2aa60ffc41452",
"app_id": "invoke-ai/InvokeAI",
"app_version": "2.2.4",
"image": {
"prompt": [
{
"prompt": "<wd135-itscalling-mob-umamusume> horse ears horse tail horse girl, running outdoors park, white t-shirts black shorts, morning sunlight, pov from side looking at viewer cowboy shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]",
"weight": 1
}
],
"steps": 64,
"cfg_scale": 10,
"threshold": 0,
"perlin": 0,
"height": 768,
"width": 512,
"seed": 460073536,
"seamless": false,
"hires_fix": false,
"type": "txt2img",
"postprocessing": null,
"sampler": "k_dpmpp_2",
"variations": []
}
}
```
!["<wd135-itscalling-mob-umamusume> horse ears horse tail horse girl, school sailor uniform white shirt purple pleated skirt, standing looking at viewer smile one eye closed arms behind back, standing indoors empty classroom, dusk sunset ambience light, full body shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]" -s 64 -S 1869090925 -W 512 -H 768 -C 10 -A k_dpmpp_2](./examples/000032.f35340f2.1869090925.png)
```json
{
"model": "stable diffusion",
"model_weights": "waifu-diffusion-1.3.5",
"model_hash": "b438efac4434af4e482d20cdfcea64067f8dfec438628261d2f2aa60ffc41452",
"app_id": "invoke-ai/InvokeAI",
"app_version": "2.2.4",
"image": {
"prompt": [
{
"prompt": "<wd135-itscalling-mob-umamusume> horse ears horse tail horse girl, school sailor uniform white shirt purple pleated skirt, standing looking at viewer smile one eye closed arms behind back, standing indoors empty classroom, dusk sunset ambience light, full body shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]",
"weight": 1
}
],
"steps": 64,
"cfg_scale": 10,
"threshold": 0,
"perlin": 0,
"height": 768,
"width": 512,
"seed": 1869090925,
"seamless": false,
"hires_fix": false,
"type": "txt2img",
"postprocessing": null,
"sampler": "k_dpmpp_2",
"variations": []
}
}
```
## License
[MIT](./LICENSE). |
DataikuNLP/paraphrase-albert-small-v2 | [
"pytorch",
"albert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| sentence-similarity | {
"architectures": [
"AlbertModel"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 628 | 2023-01-01T13:56:28Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook (assumed available here).
model = load_from_hub(repo_id="glitchyordis/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DavidAMcIntosh/DialoGPT-small-rick | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-01T14:14:19Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.32 +/- 2.89
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook (assumed available here).
model = load_from_hub(repo_id="jacobthebanana/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DavidAMcIntosh/small-rick | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: sdcid
---
###
Sample pictures of:
sdcid (use that on your prompt)

|
Davlan/bert-base-multilingual-cased-finetuned-kinyarwanda | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | 2023-01-01T14:44:56Z | ---
language:
- vi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: HuyenNguyen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HuyenNguyen
This model is a fine-tuned version of [Huyen2310/FPT-S15000](https://huggingface.co/Huyen2310/FPT-S15000) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 450
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Davlan/bert-base-multilingual-cased-finetuned-luganda | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sentiment_analysis_of_tweets_on_covid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_analysis_of_tweets_on_covid
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6161
- eval_accuracy: 0.7635
- eval_runtime: 34.9554
- eval_samples_per_second: 57.216
- eval_steps_per_second: 7.152
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
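As a minimal usage sketch, the checkpoint can be queried through the 🤗 `pipeline` API; `<repo-id>` is a placeholder for this model's hub id, which the card does not state:
```python
from transformers import pipeline

# <repo-id> is a placeholder; substitute this model's actual repository id
classifier = pipeline("text-classification", model="<repo-id>")
print(classifier("Finally got my second vaccine dose today!"))
```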
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Davlan/bert-base-multilingual-cased-finetuned-luo | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-urdu-commonvoice-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-urdu-commonvoice-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4217
- eval_wer: 0.7946
- eval_runtime: 53.8128
- eval_samples_per_second: 18.583
- eval_steps_per_second: 2.323
- epoch: 18.38
- step: 5000
## Model description
More information needed
## Intended uses & limitations
More information needed
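As an inference sketch for the resulting CTC model; `<repo-id>` is a placeholder for this model's hub id, and `speech` is assumed to be a 1-D float array sampled at 16 kHz:
```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# <repo-id> is a placeholder; substitute this model's actual repository id
processor = Wav2Vec2Processor.from_pretrained("<repo-id>")
model = Wav2Vec2ForCTC.from_pretrained("<repo-id>")

# `speech` is assumed to be a 1-D float array at 16 kHz (e.g. loaded with librosa)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```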
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.0+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
|
Davlan/byt5-base-yor-eng-mt | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook (assumed available here)
model = load_from_hub(repo_id="HamzaFarhan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
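To reproduce a mean-reward figure like the one above, a sketch of an evaluation loop; it assumes the pickled dict exposes the Q-table under a `qtable` key and that the environment must be recreated with `is_slippery=False` to match training:
```python
import gym
import numpy as np

env = gym.make(model["env_id"], is_slippery=False)  # assumption: matches the no_slippery variant
episode_rewards = []
for _ in range(100):
    state = env.reset()  # classic Gym API; newer versions return (state, info)
    done, total = False, 0.0
    while not done:
        state, reward, done, _ = env.step(np.argmax(model["qtable"][state]))
        total += reward
    episode_rewards.append(total)
print(f"mean reward: {np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")
```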
|
Davlan/mt5-small-pcm-en | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### jjoonnaa Dreambooth model trained by Brainergy with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Davlan/mt5_base_yor_eng_mt | [
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 107.60 +/- 161.67
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and checkpoint filename below are placeholders; the card does not state this model's hub coordinates):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder coordinates; replace with this model's actual repo id and .zip filename
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = A2C.load(checkpoint)
```
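A short rollout sketch; it assumes the classic Gym API and that `pybullet_envs` is installed so the environment is registered:
```python
import gym
import pybullet_envs  # noqa: F401 (registers AntBulletEnv-v0)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```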
|
Davlan/xlm-roberta-base-finetuned-amharic | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 401 | 2023-01-01T15:56:44Z | ---
license: apache-2.0
---
This is a placeholder model page linking the arXiv preprint
"An Explainable Machine Learning Approach to Visual-Interactive Labeling: A Case Study on Non-communicable Disease Data" [[1]](https://arxiv.org/abs/2209.12778)[[2]](https://arxiv.org/abs/2209.12778v1)
by Donlapark Ponnoprat, Parichart Pattarapanitchai, Phimphaka Taninpong, and Suthep Suantai
to the demo of the official app **XLabel** at https://huggingface.co/spaces/Donlapark/XLabel. |