modelId (string, 4–81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51–438k chars)
---|---|---|---|---|---|---|
Barytes/hellohf | [
"tf",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 51.40 +/- 35.91
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and to train your own, see Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
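The card itself ships no loading snippet; below is a minimal sketch, assuming the checkpoint was pushed as `model.pt` by the course's helper (both the repo id and the filename are placeholders to verify against the repository's file list):
```python
import torch
from huggingface_hub import hf_hub_download

# Both names below are assumptions; check the repository's "Files" tab.
checkpoint_path = hf_hub_download(repo_id="<user>/Reinforce-Pixelcopter", filename="model.pt")
# Loading a fully pickled module requires the course's Policy class to be importable.
policy = torch.load(checkpoint_path)
policy.eval()
```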
|
Batsy24/DialoGPT-small-Twilight_EdBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | Anime text-to-image model focused on very vibrant and saturated images


 |
Battlehooks/distilbert-base-uncased-finetuned-squad | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.42 +/- 17.58
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders for this repository's actual values):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholder names; check the repository's file list for the real ones.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
BatuhanYilmaz/bert-finetuned-nerxD | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-17T06:25:27Z | ---
license: apache-2.0
datasets:
- squad_v2
language:
- en
metrics:
- squad_v2
pipeline_tag: question-answering
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
BatuhanYilmaz/dummy-model | [
"tf",
"camembert",
"fill-mask",
"transformers",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"CamembertForMaskedLM"
],
"model_type": "camembert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: FPT-P3-23000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FPT-P3-23000
This model is a fine-tuned version of [HuyenNguyen/FPT-P3-15000](https://huggingface.co/HuyenNguyen/FPT-P3-15000) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3727
- Wer: 31.4391
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` reconstruction follows this list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
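A hedged mapping of these settings onto 🤗 `TrainingArguments` (the `output_dir` value is an assumption; the Adam betas and epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="FPT-P3-23000",       # assumed; any local path works
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=1000,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```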
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2922 | 0.4 | 200 | 0.3976 | 31.9574 |
| 0.3445 | 0.8 | 400 | 0.3981 | 21.0180 |
| 0.2742 | 1.2 | 600 | 0.3975 | 31.7631 |
| 0.251 | 1.6 | 800 | 0.3820 | 31.2911 |
| 0.2222 | 2.0 | 1000 | 0.3727 | 31.4391 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
BatuhanYilmaz/dummy | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
BatuhanYilmaz/mlm-finetuned-imdb | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/pii-pile-chunk3-0-50000
- tomekkorbak/pii-pile-chunk3-50000-100000
- tomekkorbak/pii-pile-chunk3-100000-150000
- tomekkorbak/pii-pile-chunk3-150000-200000
- tomekkorbak/pii-pile-chunk3-200000-250000
- tomekkorbak/pii-pile-chunk3-250000-300000
- tomekkorbak/pii-pile-chunk3-300000-350000
- tomekkorbak/pii-pile-chunk3-350000-400000
- tomekkorbak/pii-pile-chunk3-400000-450000
- tomekkorbak/pii-pile-chunk3-450000-500000
- tomekkorbak/pii-pile-chunk3-500000-550000
- tomekkorbak/pii-pile-chunk3-550000-600000
- tomekkorbak/pii-pile-chunk3-600000-650000
- tomekkorbak/pii-pile-chunk3-650000-700000
- tomekkorbak/pii-pile-chunk3-700000-750000
- tomekkorbak/pii-pile-chunk3-750000-800000
- tomekkorbak/pii-pile-chunk3-800000-850000
- tomekkorbak/pii-pile-chunk3-850000-900000
- tomekkorbak/pii-pile-chunk3-900000-950000
- tomekkorbak/pii-pile-chunk3-950000-1000000
- tomekkorbak/pii-pile-chunk3-1000000-1050000
- tomekkorbak/pii-pile-chunk3-1050000-1100000
- tomekkorbak/pii-pile-chunk3-1100000-1150000
- tomekkorbak/pii-pile-chunk3-1150000-1200000
- tomekkorbak/pii-pile-chunk3-1200000-1250000
- tomekkorbak/pii-pile-chunk3-1250000-1300000
- tomekkorbak/pii-pile-chunk3-1300000-1350000
- tomekkorbak/pii-pile-chunk3-1350000-1400000
- tomekkorbak/pii-pile-chunk3-1400000-1450000
- tomekkorbak/pii-pile-chunk3-1450000-1500000
- tomekkorbak/pii-pile-chunk3-1500000-1550000
- tomekkorbak/pii-pile-chunk3-1550000-1600000
- tomekkorbak/pii-pile-chunk3-1600000-1650000
- tomekkorbak/pii-pile-chunk3-1650000-1700000
- tomekkorbak/pii-pile-chunk3-1700000-1750000
- tomekkorbak/pii-pile-chunk3-1750000-1800000
- tomekkorbak/pii-pile-chunk3-1800000-1850000
- tomekkorbak/pii-pile-chunk3-1850000-1900000
- tomekkorbak/pii-pile-chunk3-1900000-1950000
model-index:
- name: musing_colden
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# musing_colden
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-* datasets: the 39 consecutive 50,000-example slices from 0 to 1,950,000 listed in the metadata above.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [25177],
'gpt3_kwargs': {'model_name': 'davinci'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'alpha': 1, 'name': 'Unlikelihood', 'score_threshold': 0.0},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'musing_colden',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
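The `generation` scenario above corresponds roughly to this unconditional 🤗 `generate` call (a sketch; only the sampling knobs come from the config, and `gpt2` is the tokenizer/model name the config itself specifies):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
# With no prompt, generation starts from the BOS token (the "unconditional" scenario).
out = lm.generate(
    do_sample=True, max_length=128, min_length=10,
    temperature=0.7, top_k=0, top_p=0.9,
    pad_token_id=tok.eos_token_id,
)
print(tok.decode(out[0], skip_special_tokens=True))
```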
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/chijl93q |
BatuhanYilmaz/mt5-small-finetuned-amazonbooks-en-es | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: HateXplain-majority-labeled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HateXplain-majority-labeled
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4749
- Accuracy: 0.6708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Baybars/debateGPT | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: mlp
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.97 +/- 62.39
name: mean_reward
verified: false
---
# **mlp** Agent playing **LunarLander-v2**
This is a trained model of an **mlp** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (this card names only the "mlp" policy, so the algorithm class, repo id, and filename below are all assumptions):
```python
from stable_baselines3 import PPO  # assumed; the card does not name the algorithm
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(repo_id="<user>/mlp-LunarLander-v2", filename="mlp-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
BearThreat/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- sst2
model-index:
- name: finetuned_gpt2-large_sst2_negation0.001_pretrainedTrue_epochs2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_gpt2-large_sst2_negation0.001_pretrainedTrue_epochs2
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0066 | 1.0 | 1322 | 2.9427 |
| 1.5196 | 2.0 | 2644 | 3.0769 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Beelow/model | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="kyeongpil/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
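Once loaded, the agent can be rolled out greedily from its Q-table. A minimal sketch, reusing `model` and `env` from the snippet above, and assuming the pickled dict exposes a `"qtable"` key and the classic 4-tuple Gym step API as in the course notebooks:

```python
import numpy as np

state = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward}")
```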
|
Begimay/Task | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
inference:
parameters:
max_length: 250
temperature: 0.7
top_p: 1
license: creativeml-openrail-m
widget:
- text: 用户:帮我写个请假条,我因为新冠不舒服,需要请假3天,请领导批准\n小元:
- text: >-
用户:新冠什么症状?\n小元:新冠是指新型冠状病毒,其症状包括发热、干咳、乏力、嗅味觉减退、呼吸困难等。\n用户:可以吃什么药?\n小元:根据您提供的病史,目前没有明确的抗新冠病毒的药物,建议您在家进行自我隔离,避免与他人接触,多喝开水,清淡易消化饮食,避免熬夜和过度劳累,适当进行户外活动。\n用户:用什么后遗症么?\n小元:
library_name: paddlenlp
language:
- zh
pipeline_tag: text2text-generation
---
<a href="https://colab.research.google.com/drive/14B5_EyAXduE1WTaPiDs90-4Atsy7uaoy?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg"></a>
ChatYuan: Yuanyu's functional conversational large language model
The model files on this page were converted from ChatYuan-large-v1 for PaddleNLP. Functionality is identical to ChatYuan-large-v1: it handles question answering, context-aware dialogue, and a wide range of generation tasks including creative writing, and it can also answer questions in domains such as law and COVID-19. ChatYuan-large-v1 was obtained by further training PromptCLUE-large on hundreds of millions of functional multi-turn dialogue examples.
<a href='https://www.cluebenchmarks.com/clueai.html'>PromptCLUE-large:</a> pretrained on a 100-billion-token Chinese corpus, with 1.5 trillion Chinese tokens seen in total, then prompt-tuned on several hundred tasks. For understanding tasks such as classification, sentiment analysis, and extraction, the label set can be customized; for many generation tasks, free-form sampled generation is supported.
<a href='https://www.yuanyu.ai'>Online demo (search for the "元语智能" mini-program in WeChat)</a> |
<a href='https://www.clueai.cn'>API (large version)</a> |
<a href='https://github.com/clue-ai/ChatYuan'>GitHub repository</a> |
<a href='https://colab.research.google.com/drive/14B5_EyAXduE1WTaPiDs90-4Atsy7uaoy?usp=sharing#scrollTo=QokO0pdGmAYH'>Try it on Colab</a>
Scan the QR code in WeChat to try it online:
<img src="https://huggingface.co/ClueAI/ChatYuan-large-v1/resolve/main/chatyuan_wechat.jpg" width="30%" height="30%" />
Load the model:
```python
# Load the model
from paddlenlp.transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("ClueAI/ChatYuan-large-v1", from_hf_hub=False)
model = T5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v1", from_hf_hub=False)
```
Run inference with the model:
```python
# Usage
# The GPU build of Paddle is used here; set the Colab runtime to GPU for faster inference.
def preprocess(text):
    text = text.replace("\n", "\\n").replace("\t", "\\t")
    return text

def postprocess(text):
    return text.replace("\\n", "\n").replace("\\t", "\t")

def answer(text, sample=True, top_p=1, temperature=0.7):
    '''sample: whether to sample; for generation tasks this can be set to True.
    top_p: between 0 and 1; larger values give more diverse output.'''
    text = preprocess(text)
    encoding = tokenizer(text=[text], truncation=True, padding=True, max_length=768, return_tensors="pd")
    if not sample:
        out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_length=512, max_new_tokens=512, num_beams=1, length_penalty=0.4)
    else:
        out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_length=512, max_new_tokens=512, do_sample=True, top_p=top_p, temperature=temperature, no_repeat_ngram_size=3)
    out_text = tokenizer.batch_decode(out[0], skip_special_tokens=True)
    return postprocess(out_text[0])
print("end...")
```
# Question answering, writing, and assistant-style tasks
```python
input_text0 = "帮我写一个请假条,我因为新冠不舒服,需要请假3天,请领导批准"
input_text1 = "你能干什么"
input_text2 = "用英文写一封道歉的邮件,表达因为物流延误,不能如期到达,我们可以赔偿贵公司所有损失"
input_text3 = "写一个文章,题目是未来城市"
input_text4 = "写一个诗歌,关于冬天"
input_text5 = "从南京到上海的路线"
input_text6 = "学前教育专业岗位实习中,在学生方面会存在问题,请提出改进措施。800字"
input_text7 = "根据标题生成文章:标题:屈臣氏里的化妆品到底怎么样?正文:化妆品,要讲究科学运用,合理搭配。屈臣氏起码是正品连锁店。请继续后面的文字。"
input_text8 = "帮我对比几款GPU,列出详细参数对比,并且给出最终结论"
input_list = [input_text0, input_text1, input_text2, input_text3, input_text4, input_text5, input_text6, input_text7, input_text8]
for i, input_text in enumerate(input_list):
    input_text = "用户:" + input_text + "\n小元:"
    print(f"示例{i}".center(50, "="))
    output_text = answer(input_text)
    print(f"{input_text}{output_text}")
```
=======================示例0========================
用户:帮我写一个请假条,我因为新冠不舒服,需要请假3天,请领导批准
小元:尊敬的领导:
您好!
因为最近新冠病毒的流行,我感到身体不舒服,所以请求请假3天,在这段时间内,我将积极配合医生的治疗,保证充足的睡眠,并做好防护工作。
在此期间,我保证将努力完成我的工作,并按时下班,在请假期间,我绝对遵守公司的规章制度,接受公司的监督和检查,请领导批准。
谢谢!
此致
敬礼!
=======================示例1========================
用户:你能干什么
小元:您好!我是元语AI。我可以回答您的问题、写文章、写作业、翻译,对于一些法律等领域的问题我也可以给你提供信息。
=======================示例2========================
用户:用英文写一封道歉的邮件,表达因为物流延误,不能如期到达,我们可以赔偿贵公司所有损失
小元:E-mail apologies to your company for not reaching your delivery due to delay in logistics, we can compensate you for all losses.
=======================示例3========================
用户:写一个文章,题目是未来城市
小元:未来的城市是怎样的呢?
未来城市拥有很多不同的功能,如智慧城市、智能交通系统、远程医疗系统、智能家居系统、智慧医疗系统等。未来城市还使用物联网技术,可以远程控制,使用远程操控和传感器来监控城市的发展,帮助他们解决各种城市问题。
未来的城市还采用互联网技术,可以让城市变得更智能,让城市变得更智慧,让每一个人都能更轻松地生活。未来城市的智能设备也可以让城市的使用者更方便地掌握他们的生活,比如智能的智能交通系统和远程医疗系统和智能的安防系统等,可以提供他们更多的便利。
总之,未来城市的愿景是,让所有城市的使用者都能拥有一个美好的未来,让他们可以在自己的城市享受美好的生活。
=======================示例4========================
用户:写一个诗歌,关于冬天
小元:冬夜寂静冷,
云在天边飘,
冰封白雪上,
寒冷像一场雪。
雪花融化成冰,
像那雪花飘洒,
在寒冷的冬天,
感受春天的喜悦。
冬日里,
风雪渐消,
一片寂静,
把快乐和温暖带回家。
=======================示例5========================
用户:从南京到上海的路线
小元:你好,南京到上海的路线如下:
1. 南京到上海,可以乘坐南京地铁1号线,在南京站乘坐轨道交通1号线。
2. 南京到浦东机场,可以搭乘上海地铁1号,在陆家嘴站乘坐地铁1线,在浦东国际机场站乘坐机场快线,前往上海浦东国际机场。
3. 上海到南京,可以换乘上海地铁2号线,从南京站换乘地铁2线,再从南京南站换乘地铁1路,然后到达上海站。
=======================示例6========================
用户:学前教育专业岗位实习中,在学生方面会存在问题,请提出改进措施。800字
小元:在学前教育专业岗位实习中,在学生方面会存在一些问题,如学生缺少专业知识、缺乏合作意识、缺乏基本的责任心等等,需要我们采取措施,来解决这些问题。
首先,在实习过程中,我们需要培养学生的团队意识,加强学生与老师之间的合作,加强与家长之间的沟通,提高他们的沟通能力。另外,我们也需要加强学生的独立性,让学生学会独立思考,学会和老师、家长进行沟通,培养其独立性。
此外,在日常的工作中,我们也要为学生提供适当的机会和机会,让学生多接触社会、多参加社会活动,加深他们对社会、学校的认识,培养他们主动、积极和创新的精神。
最后,我们还需要重视学生在实习中出现的问题,尤其是学生缺乏合作,有依赖心理,不懂得分享,有拖延行为等,这些学生应该引起我们的重视,制定合理的改进措施,改善这些问题。
=======================示例7========================
用户:根据标题生成文章:标题:屈臣氏里的化妆品到底怎么样?正文:化妆品,要讲究科学运用,合理搭配。屈臣氏起码是正品连锁店。请继续后面的文字。
小元:在屈臣氏买东西,要讲究科学运用,合理搭配。屈臣家起码是正品连锁店。下面,我就拿“化妆品”来说说屈臣家的化妆品。“化妆品”(护肤品、粉底液)产品种类多,产品品牌多,销售方式多样,产品种类比较齐全。屈氏是全港最大最大的化妆品超市之一。屈购是屈臣买日用品有限公司的简称。屈购物品在屈购商场经营,在屈购物市经营的屈购公司有7家,分别是:屈购商城、屈购超市、屈取电器、屈拿电器、悦活、悦养、雅活,在“屈购”经营的商品有:洗发水、眼霜、口红、面霜、润肤露、面膜、洗发精、沐浴露、香皂、化妆品、沐浴产品、洁肤产品、洗衣液、护发素、洗护用品、化妆品用品、男士化妆品。屈买商品有:护肤霜、护肤粉底霜、眼影、指甲油、香水、身体洗发露、防晒产品、防晒用品、润发产品、美容工具、化妆品等。屈商的化妆品品种繁多,种类繁多,其中化妆品有100多种,占香港商品总量的
=======================示例8========================
用户:帮我对比几款GPU,列出详细参数对比,并且给出最终结论
小元:一、显存容量:
1 GeForce GTX 880 显存类型:GDDR5
2 Geforce RTX 1060 显存在类型:PhysX物理加速
3 Ge force MX150 显存在于类型:SLI
4 GetoDirectX 11:支持DX11
二、显显存带宽:
1.GDDR4X 显显容量:256MB
2.GDDR6X 最大显存范围:8GB
3.GDDR3 显在带宽:120GB
4.GDDR7 显适用于2GB显存
三、显效时间:
1.4 GB/s
2. 5.5 ms
3. 5 ms
# Multi-turn dialogue
```python
input_text = ["你好","新冠什么症状?","可以吃什么药?"]
answer_text = ["您好!我是元语AI。我可以回答您的问题、写文章、写作业、翻译,对于一些法律等领域的问题我也可以给你提供信息", "新冠是指新型冠状病毒,其症状包括发热、干咳、乏力、嗅味觉减退、呼吸困难等。", "根据您提供的病史,目前没有明确的抗新冠病毒的药物,建议您在家进行自我隔离,避免与他人接触,多喝开水,清淡易消化饮食,避免熬夜和过度劳累,适当进行户外活动。"]
context = "\n".join([f"用户:{input_text[i]}\n小元:{answer_text[i]}" for i in range(len(input_text))])
print(context)
input_text = "用什么后遗症么?"
print(f"示例".center(50, "="))
input_text = context + "\n用户:" + input_text + "\n小元:"
output_text = answer(input_text)
print(f"{input_text}{output_text}")
```
========================示例========================
用户:你好
小元:您好!我是元语AI。我可以回答您的问题、写文章、写作业、翻译,对于一些法律等领域的问题我也可以给你提供信息
用户:新冠什么症状?
小元:新冠是指新型冠状病毒,其症状包括发热、干咳、乏力、嗅味觉减退、呼吸困难等。
用户:可以吃什么药?
小元:根据您提供的病史,目前没有明确的抗新冠病毒的药物,建议您在家进行自我隔离,避免与他人接触,多喝开水,清淡易消化饮食,避免熬夜和过度劳累,适当进行户外活动。
用户:用什么后遗症么?
小元:目前还没有人具体说是什么后遗症,但是目前症状比较轻的,可能没有后遗症,但是如果症状比较重,就可能出现呼吸困难,胸闷,发热,咳嗽等症状。
### Technical discussion and feedback
<a href='https://github.com/clue-ai/ChatYuan#%E6%8A%80%E6%9C%AF%E4%BA%A4%E6%B5%81%E5%92%8C%E9%97%AE%E9%A2%98%E5%8F%8D%E9%A6%88%E6%89%AB%E7%A0%81%E5%9C%A8%E7%BA%BF%E4%BD%93%E9%AA%8C%E5%B0%8F%E7%A8%8B%E5%BA%8F%E6%88%96%E5%85%A5%E7%BE%A4'>Join the WeChat discussion group</a> |
Bella4322/Sarah | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="kyeongpil/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BenDavis71/GPT-2-Finetuning-AIRaid | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
language:
- zh
license: creativeml-openrail-m
widget:
- text: |-
这是关于哪方面的新闻:
如果日本沉没,中国会接收日本难民吗?
选项:故事,文化,娱乐,体育,财经,房产,汽车,教育,科技,军事,旅游,国际,股票,农业,游戏
答案:
- text: |-
以下两句话是否表达相同意思:
文本1:糖尿病腿麻木怎么办?
文本2:糖尿病怎样控制生活方式
选项:相似,不相似
答案:
- text: |-
阅读以下对话并回答问题。
男:今天怎么这么晚才来上班啊?女:昨天工作到很晚,而且我还感冒了。男:那你回去休息吧,我帮你请假。女:谢谢你。
问题:女的怎么样?
选项:正在工作,感冒了,在打电话,要出差。
答案:
- text: |-
信息抽取:
张玄武1990年出生中国国籍无境外居留权博士学历现任杭州线锁科技技术总监。
问题:机构,人名,职位,籍贯,专业,国籍,种族
答案:
- text: >-
抽取关键词:
当地时间21日,美国联邦储备委员会宣布加息75个基点,将联邦基金利率目标区间上调到3.00%至3.25%之间,符合市场预期。这是美联储今年以来第五次加息,也是连续第三次加息,创自1981年以来的最大密集加息幅度。
关键词:
- text: |-
翻译成中文:
This is a dialogue robot that can talk to people.
答案:
- text: >-
为下面的文章生成摘要:
北京时间9月5日12时52分,四川甘孜藏族自治州泸定县发生6.8级地震。地震发生后,领导高度重视并作出重要指示,要求把抢救生命作为首要任务,全力救援受灾群众,最大限度减少人员伤亡
摘要:
- text: |-
推理关系判断:
前提:小明明天要去北京
假设:小明计划明天去上海
选项:矛盾,蕴含,中立
答案:
- text: |-
问答:
问题:小米的创始人是谁?
答案:
library_name: paddlenlp
pipeline_tag: text2text-generation
---
<a href="https://colab.research.google.com/drive/1hlSMYEq3pyX-fwTSqIOT1um80kU1yOJF?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg"></a>
PromptCLUE: a zero-shot learning model for all-Chinese tasks
This model was converted from PromptCLUE-base for PaddleNLP. Like the original, it was pretrained on a 100-billion-token Chinese corpus, with 1.5 trillion Chinese tokens seen in total, then prompt-tuned on several hundred tasks. For understanding tasks such as classification, sentiment analysis, and extraction, the label set can be customized; for many generation tasks, free-form sampled generation is supported.
<a href='https://www.cluebenchmarks.com/clueai.html'>Online demo</a> |
<a href='https://www.clueai.cn'>clueai toolkit and API (large version)</a> |
<a href='https://github.com/clue-ai/PromptCLUE'>GitHub repository</a> |
<a href='https://colab.research.google.com/drive/1hlSMYEq3pyX-fwTSqIOT1um80kU1yOJF?usp=sharing'>Try it on Colab</a>
Load the model:
```python
# Load the model
from paddlenlp.transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("ClueAI/PromptCLUE-base", from_hf_hub=False)
model = T5ForConditionalGeneration.from_pretrained("ClueAI/PromptCLUE-base", from_hf_hub=False)
```
Run inference with the model:
```python
# The GPU build of Paddle is used here, which makes inference faster.
def preprocess(text):
    return text.replace("\n", "_")

def postprocess(text):
    return text.replace("_", "\n")

def answer(text, sample=False, top_p=0.8):
    '''sample: whether to sample; for generation tasks this can be set to True.
    top_p: between 0 and 1; larger values give more diverse output.'''
    text = preprocess(text)
    encoding = tokenizer(text=[text], truncation=True, padding=True, max_length=768, return_tensors="pd")
    if not sample:
        out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_length=128, num_beams=4, length_penalty=0.6)
    else:
        out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_length=64, do_sample=True, top_p=top_p)
    out_text = tokenizer.batch_decode(out[0], skip_special_tokens=True)
    return postprocess(out_text[0])
```
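Each example below is simply such a prompt string passed to `answer`; for instance, the intent-classification input further down:

```python
prompt = "意图分类:\n帮我定一个周日上海浦东的房间\n选项:闹钟,文学,酒店,艺术,体育,健康,天气,其他\n答案:"
print(answer(prompt))  # expected output, per the example below: 酒店
```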
### Example inputs
#### News classification (classify)
```bash
Input:
分类任务:
折价率过低遭抛售基金泰和跌7.15%,证券时报记者 朱景锋本报讯 由于折价率在大盘封基中处于最低水平,基金泰和昨日遭到投资者大举抛售,跌幅达到7.15%,远超大盘。盘面显示,基金泰和随大盘高开,之后开始震荡走低,午后开始加速下行,几乎没有像样反弹。截至收盘时,在沪深300指数仅下跌2.56%的情况下,基金泰和收盘跌幅高达7.15%,在所有封基中跌幅最大,而昨日多数封基跌幅在2%左右。
选项:财经,娱乐,时政,股票
答案:
Model output:
财经
```
#### Intent classification (classify)
```bash
Input:
意图分类:
帮我定一个周日上海浦东的房间
选项:闹钟,文学,酒店,艺术,体育,健康,天气,其他
答案:
Model output:
酒店
```
#### Sentiment analysis (classify)
```bash
Input:
情感分析:
这个看上去还可以,但其实我不喜欢
选项:积极,消极
答案:
Model output:
消极
```
#### Natural language inference (generate)
```bash
Input:
请推理出上下文的关系:
前提:对不起事情就是这样。
假设:事情就是这样,不需要道歉。
选项:中立,蕴涵,矛盾
答案:
Model output:
矛盾
```
#### Reading comprehension (generate)
```bash
Input:
阅读文章,给出答案:
段落:
港汇指数,全称港元实际汇兑指数(Effective Exchange Rate Index for the Hong Kong Dollar)是由香港政府统计处编制的一项指数,以反映港元与香港主要贸易伙伴之货币的名义有效汇率加权平均数的变动情况。加权比重是按1999年至2000年平均贸易模式所制定,但政府并未有公布详细的计算公式。旧港汇指数基准日为2000年1月1日,基数为100点。由2012年1月3日起,新系列港汇指数 (包括15种货币及以2010年1月 = 100) 已取代旧港汇指数系列。港汇指数的作用,主要是用于反映香港的货品及服务的价格相对于其主要贸易伙伴的变动,并通常被视作反映香港价格竞争力的指标。
问题:港汇指数的加权比重如何制定?
答案:
Model output:
按1999年至2000年平均贸易模式所制定
```
#### Reading comprehension, free-form (generate)
```bash
Input:
阅读以下对话并回答问题。
男:今天怎么这么晚才来上班啊?女:昨天工作到很晚,而且我还感冒了。男:那你回去休息吧,我帮你请假。女:谢谢你。
问题:女的怎么样?
选项:正在工作,感冒了,在打电话,要出差。
答案:
Model output:
感冒了
```
#### Summarization (generate)
```bash
Input:
为下面的文章生成摘要:
北京时间9月5日12时52分,四川甘孜藏族自治州泸定县发生6.8级地震。地震发生后,领导高度重视并作出重要指示,要求把抢救生命作为首要任务,全力救援受灾群众,最大限度减少人员伤亡
答案:
Model output:
四川甘孜发生6.8级地震
```
#### Translation, Chinese to English (generate)
```bash
Input:
翻译成英文:
议长去了台湾,中国人民很愤怒。
答案:
Model output:
The secretary went to Taiwan and the Chinese people were angry.
```
#### Translation, English to Chinese (generate)
```bash
Input:
翻译成中文:
This is a dialogue robot that can talk to people.
答案:
Model output:
这是一台可以与人交谈的对话机器人。
```
#### General information extraction (generate)
```bash
Input:
信息抽取:
据新华社电广东省清远市清城区政府昨日对外发布信息称,日前被实名举报涉嫌勒索企业、说“分分钟可以搞垮一间厂”的清城区环保局局长陈柏,已被免去清城区区委委员
问题:机构名,人名,职位
答案:
Model output:
机构名:新华社,清城区政府,清城区环保局,清城区区委
人名:陈柏
职位:局长,区委委员
```
#### Résumé information extraction (generate)
```bash
Input:
阅读文本抽取关键信息:
张玄武2000年出生中国国籍无境外居留权博士学历现任杭州线锁科技技术总监。
问题:机构,人名,职位,籍贯,专业,国籍,学历,种族
答案:
Model output:
人名:张玄武
职位:杭州线锁科技技术总监
国籍:中国国籍
学历:博士学历
```
#### Medical information extraction (generate)
```bash
Input:
从文本中抽取信息:
患者精神可,饮食可,睡眠可,二便正常。患者通过综合治疗头晕症状较前减轻,患者继续口服改善脑血管及调整血压变化药物。
问题:症状,治疗,检查,身体部位,疾病
答案:
Model output:
症状:头晕
治疗:改善脑血管及调整血压变化药物
身体部位:二便
```
#### E-commerce customer-intent classification (classify)
```bash
Input:
电商客户诉求分类:
收到但不太合身,可以退换吗
选项:买家咨询商品是否支持花呗付款,买家表示收藏关注店铺,买家咨询退换货规则,买家需要商品推荐
答案:
Model output:
买家咨询退换货规则
```
#### Medical semantic similarity (classify)
```bash
Input:
下面句子是否表示了相同的语义:
文本1:糖尿病腿麻木怎么办?
文本2:糖尿病怎样控制生活方式
选项:相似,不相似
答案:
Model output:
不相似
```
#### Question generation (generate)
```bash
Input:
问题生成:
中新网2022年9月22日电 22日,商务部召开例行新闻发布会,商务部新闻发言人束珏婷表示,今年1-8月,中国实际使用外资1384亿美元,增长20.2%;其中,欧盟对华投资增长123.7%(含通过自由港投资数据)。这充分表明,包括欧盟在内的外国投资者持续看好中国市场,希望继续深化对华投资合作。
答案:
Model output:
1、商务部发言人束珏婷表示,中国去年1-8月实际使用外资1384亿美元,增长20.2%,其中欧盟对华投资增长多少?
2、欧盟对华投资的增长是否达到预期?
3、欧盟在对华投资合作方面取得什么样的成果?
```
#### Coreference resolution (generate)
```bash
Input:
指代消解:
段落:
少平跟润叶进了她二爸家的院子,润生走过来对他(代词)说:“我到宿舍找了你两回,你到哪里去了?”
问题:代词“他”指代的是?
答案:
Model output:
少平
```
#### Keyword extraction (generate)
```bash
Input:
抽取关键词:
当地时间21日,美国联邦储备委员会宣布加息75个基点,将联邦基金利率目标区间上调到3.00%至3.25%之间,符合市场预期。这是美联储今年以来第五次加息,也是连续第三次加息,创自1981年以来的最大密集加息幅度。
关键词:
Model output:
美联储,利率目标区间,加息,基点
```
#### Emotion classification (classify)
```bash
文字中包含了怎样的情感:
超可爱的帅哥,爱了。。。
选项:厌恶,喜欢,开心,悲伤,惊讶,生气,害怕
答案:
Model output:
喜欢
```
For more example capabilities and models, see
[ClueAI](https://github.com/clue-ai/PromptCLUE) |
Bharathdamu/wav2vec2-large-xls-r-300m-hindi-colab | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- wildcard
widget:
- text: illustration of a tthero tank sitting on top of the deck of a battle ship
traveling through the open sea with a lot of ships surrounding it
---
# DreamBooth model for the tthero concept trained by cleexiang.
This is a Stable Diffusion model fine-tuned on the tthero concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of tthero tank**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `tank` images for the wildcard theme, created for the Hugging Face DreamBooth Hackathon by the HF CN Community in collaboration with HeyWhale.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('cleexiang/tthero-tank-heywhale')
image = pipeline("a photo of tthero tank").images[0]  # the pipeline needs a prompt; this is the instance prompt
image
```
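On a GPU machine you would typically move the pipeline to CUDA first; a minimal sketch (the fp16 cast and step count are common defaults, not taken from this card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "cleexiang/tthero-tank-heywhale", torch_dtype=torch.float16
)
pipeline = pipeline.to("cuda")  # run the denoising loop on the GPU
image = pipeline("a photo of tthero tank", num_inference_steps=50).images[0]
image.save("tthero_tank.png")
```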
|
Bharathdamu/wav2vec2-large-xls-r-300m-hindi2-colab | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- sst2
model-index:
- name: finetuned_gpt2-medium_sst2_negation0.01_pretrainedFalse_epochs3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_gpt2-medium_sst2_negation0.01_pretrainedFalse_epochs3
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2826 | 1.0 | 1323 | 2.8903 |
| 1.9713 | 2.0 | 2646 | 2.9835 |
| 1.86 | 3.0 | 3969 | 3.0533 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Bharathdamu/wav2vec2-model-hindibhasha | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sst2
model-index:
- name: finetuned_distilgpt2_sst2_negation0.01_pretrainedFalse_epochs3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_distilgpt2_sst2_negation0.01_pretrainedFalse_epochs3
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6821 | 1.0 | 1323 | 3.2535 |
| 2.5045 | 2.0 | 2646 | 3.2502 |
| 2.4511 | 3.0 | 3969 | 3.2579 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Bhuvana/t5-base-spellchecker | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 93 | 2023-01-17T08:14:54Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Bia18/Beatriz | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.37 +/- 0.15
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch — the repo id and filename below are placeholders for wherever this checkpoint is hosted:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (both ids are placeholders)
checkpoint = load_from_hub(
    repo_id="<username>/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```
|
BigSalmon/DaBlank | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 4 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
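# NOTE: `load_from_hub`, `evaluate_agent`, and the eval settings used below are
# helpers defined in the Deep RL Course (Unit 2) notebook, not library imports.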
model = load_from_hub(repo_id="neatbullshit/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
BigSalmon/FormalBerta2 | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 238.50 +/- 117.24
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RisiPisi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RisiPisi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga RisiPisi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
BigSalmon/GPT2HardArticleEasyArticle | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: mit
language: ja
tags:
- luke
- question-answering
- squad
- pytorch
- transformers
- question answering
---
# このモデルはluke-japanese-large-liteをファインチューニングして、Question-Answeringに用いれるようにしたものです。
このモデルはluke-japanese-large-liteを運転ドメインQAデータセット(DDQA)( https://nlp.ist.i.kyoto-u.ac.jp/index.php?Driving%20domain%20QA%20datasets )を用いてファインチューニングしたものです。
Question-Answeringタスク(SQuAD)に用いることができます。
# This model is fine-tuned model for Question-Answering which is based on luke-japanese-large-lite
This model is fine-tuned by using DDQA dataset.
You could use this model for Question-Answering tasks.
# モデルの精度 accuracy of model
'em (exact match)': 0.8631578947368421, 'f1': 0.9302271135164113
# How to use 使い方
Install sentencepiece and transformers (`pip install sentencepiece`, `pip install transformers`), then run the following code to solve Question-Answering tasks.
please execute this code.
```python
import torch
from transformers import AutoTokenizer, LukeForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained('Mizuiro-sakura/luke-japanese-large-finetuned-QA')
model = LukeForQuestionAnswering.from_pretrained('Mizuiro-sakura/luke-japanese-large-finetuned-QA') # load the fine-tuned model
text={
'context':'私の名前はEIMIです。好きな食べ物は苺です。 趣味は皆さんと会話することです。',
'question' :'好きな食べ物は何ですか'
}
input_ids = tokenizer.encode(text['question'], text['context']) # tokenize the question and context into input ids
output = model(torch.tensor([input_ids])) # run inference with the fine-tuned model
prediction = tokenizer.decode(input_ids[torch.argmax(output.start_logits): torch.argmax(output.end_logits) + 1]) # decode the predicted answer span (the end index is inclusive, hence +1)
print(prediction)
```
# what is Luke? Lukeとは?[1]
LUKE (Language Understanding with Knowledge-based Embeddings) is a new pre-trained contextualized representation of words and entities based on transformer. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. LUKE adopts an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores.
LUKE achieves state-of-the-art results on five popular NLP benchmarks including SQuAD v1.1 (extractive question answering), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), TACRED (relation classification), and Open Entity (entity typing). luke-japanese is the Japanese version of LUKE, a knowledge-enhanced pre-trained Transformer model for words and entities; it treats words and entities as independent tokens and outputs contextualized representations of them.
# Acknowledgments 謝辞
Lukeの開発者である山田先生とStudio ousiaさんには感謝いたします。 I would like to thank Mr.Yamada @ikuyamada and Studio ousia @StudioOusia.
# Citation
[1]@inproceedings{yamada2020luke, title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention}, author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto}, booktitle={EMNLP}, year={2020} }
|
BigSalmon/GPTNeo350MInformalToFormalLincoln4 | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2023-01-17T09:18:22Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: wikineural-multilingual-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wikineural-multilingual-ner
This model is a fine-tuned version of [Babelscape/wikineural-multilingual-ner](https://huggingface.co/Babelscape/wikineural-multilingual-ner) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1001
- Precision: 0.5714
- Recall: 0.4364
- F1: 0.4948
- Accuracy: 0.9745
## Model description
More information needed
## Intended uses & limitations
More information needed
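As a sketch of intended use — the repo id below is a placeholder for wherever this fine-tuned checkpoint is hosted:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="<username>/wikineural-multilingual-ner",  # placeholder repo id
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("My name is Wolfgang and I live in Berlin."))
```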
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2
|
BigSalmon/InformalToFormalLincoln21 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="css919/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
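# Evaluate with the course notebook's helper (assumes `evaluate_agent` and the
# stored eval settings are in scope, as in the Deep RL Course Unit 2 notebook)
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])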
```
|
BigSalmon/InformalToFormalLincoln22 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language:
- ko
library_name: doctr
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
``` |
BigSalmon/MrLincoln14 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
--- |
BigSalmon/Neo | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | 2023-01-17T10:45:42Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: imclasif-content-v001
results:
- task:
name: Image genre Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8111587762832642
---
# imclasif-content-v001
Autogenerated by HuggingPics🤗🖼️
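A minimal inference sketch — the namespace in the model id is a placeholder, and the image argument can be a local path or a URL:
```python
from transformers import pipeline

# "<namespace>" is a placeholder; use the repo this model is actually hosted under
clf = pipeline("image-classification", model="<namespace>/imclasif-content-v001")
print(clf("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```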
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). |
BigSalmon/ParaphraseParentheses | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="pruvostmichael/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BigSalmon/ParaphraseParentheses2.0 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: jrauch4/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
BigSalmon/T5Salmon2 | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 13 | 2023-01-17T11:24:59Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 11.60 +/- 5.57
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BigSalmon/TS3 | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible",
"has_space"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- en
license: gpl-3.0
tags:
- misogyny detection
- abusive language
- hate speech
- offensive language
widget:
- text: I believe religious minorities need to be protected more.
example_title: Hate Speech Detection Example 1
pipeline_tag: text-classification
datasets:
- nedjmaou/MLMA_hate_speech
---
# Entropy-based Attention Regularization 👂
This is an English BERT fine-tuned with [Entropy-based Attention Regularization](https://aclanthology.org/2022.findings-acl.88/) to reduce lexical overfitting to specific words on the task of Misogyny Identification.
Use this model if you want a debiased alternative to a BERT classifier.
Please refer to the paper to know all the training details.
## Dataset
The model was fine-tuned on the English part of the [MLMA dataset](https://aclanthology.org/D19-1474/).
## Model
This model is the fine-tuned version of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model.
We trained a total of three versions for Italian and English.
| Model | Download |
| ------ | -------------------------|
| `bert-base-uncased-ear-misogyny` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-misogyny) |
| `bert-base-uncased-ear-mlma` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-mlma) |
| `bert-base-uncased-ear-misogyny-italian` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-misogyny-italian) |
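A minimal classification sketch using one of the checkpoints listed above (the exact label names come from the model's own config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MilaNLProc/bert-base-uncased-ear-mlma",
)
print(classifier("I believe religious minorities need to be protected more."))
```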
# Authors
- [Giuseppe Attanasio](https://gattanasio.cc/)
- [Debora Nozza](http://dnozza.github.io/)
- [Dirk Hovy](https://federicobianchi.io/)
- [Elena Baralis](https://dbdmg.polito.it/wordpress/people/elena-baralis/)
# Citation
Please use the following BibTeX entry if you use this model in your project:
```
@inproceedings{attanasio-etal-2022-entropy,
title = "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists",
author = "Attanasio, Giuseppe and
Nozza, Debora and
Hovy, Dirk and
Baralis, Elena",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.88",
doi = "10.18653/v1/2022.findings-acl.88",
pages = "1105--1119",
abstract = "Natural Language Processing (NLP) models risk overfitting to specific terms in the training data, thereby reducing their performance, fairness, and generalizability. E.g., neural hate speech detection models are strongly influenced by identity terms like gay, or women, resulting in false positives, severe unintended bias, and lower performance.Most mitigation techniques use lists of identity terms or samples from the target domain during training. However, this approach requires a-priori knowledge and introduces further bias if important terms are neglected.Instead, we propose a knowledge-free Entropy-based Attention Regularization (EAR) to discourage overfitting to training-specific terms. An additional objective function penalizes tokens with low self-attention entropy.We fine-tune BERT via EAR: the resulting model matches or exceeds state-of-the-art performance for hate speech classification and bias metrics on three benchmark corpora in English and Italian.EAR also reveals overfitting terms, i.e., terms most likely to induce bias, to help identify their effect on the model, task, and predictions.",
}
```
# Limitations
Entropy-Attention Regularization mitigates lexical overfitting but does not completely remove it. We expect the model still to show biases, e.g., peculiar keywords that induce a specific prediction regardless of the context.
Please refer to our paper for a quantitative evaluation of this mitigation.
# License
[GNU GPLv3](https://choosealicense.com/licenses/gpl-3.0/) |
BigTooth/DialoGPT-small-tohru | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ranajoy98/autotrain-data-contract_types
co2_eq_emissions:
emissions: 0.004185439260806501
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2926484993
- CO2 Emissions (in grams): 0.0042
## Validation Metrics
- Loss: 0.106
- Accuracy: 0.981
- Macro F1: 0.977
- Micro F1: 0.981
- Weighted F1: 0.980
- Macro Precision: 0.983
- Micro Precision: 0.981
- Weighted Precision: 0.982
- Macro Recall: 0.975
- Micro Recall: 0.981
- Weighted Recall: 0.981
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ranajoy98/autotrain-contract_types-2926484993
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ranajoy98/autotrain-contract_types-2926484993", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ranajoy98/autotrain-contract_types-2926484993", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
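# The outputs are raw logits; map them to a label via the model config's
# id2label mapping (assumed present, as in standard AutoTrain checkpoints)
pred_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])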
``` |
BinksSachary/ShaxxBot2 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: happycoding/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Blabla/Pipipopo | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: DRL-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="eolang/DRL-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Blackmist786/DialoGPt-small-transformers4 | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- swww/autotrain-data-mm
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.3584667794035356
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2927885005
- CO2 Emissions (in grams): 0.3585
## Validation Metrics
- Loss: 0.015
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000 |
Blazeolmo/Scrabunzi | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_keras_callback
model-index:
- name: Ashraf-kasem/gpt2_fine_tune_with_callback_PolynomialDecay_from_local
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Ashraf-kasem/gpt2_fine_tune_with_callback_PolynomialDecay_from_local
This model is a fine-tuned version of [Ashraf-kasem/gpt2_fine_tune_with_callback_PolynomialDecay_from_local](https://huggingface.co/Ashraf-kasem/gpt2_fine_tune_with_callback_PolynomialDecay_from_local) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4591
- Validation Loss: 4.1433
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 231100, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_float16
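The optimizer entry above corresponds roughly to the following Keras setup — a sketch reconstructed from the logged config, not the original training script:
```python
import tensorflow as tf

# Linear decay from 5e-05 to 0.0 over 231100 steps (power=1.0 makes PolynomialDecay linear)
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-05,
    decay_steps=231100,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-07
)
```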
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0567 | 3.4196 | 0 |
| 2.0328 | 3.4604 | 1 |
| 2.0056 | 3.5015 | 2 |
| 1.9789 | 3.5125 | 3 |
| 1.9530 | 3.5556 | 4 |
| 1.9285 | 3.5970 | 5 |
| 1.9051 | 3.6428 | 6 |
| 1.8823 | 3.6087 | 7 |
| 1.8607 | 3.6300 | 8 |
| 1.8402 | 3.6607 | 9 |
| 1.8202 | 3.7323 | 10 |
| 1.8014 | 3.7363 | 11 |
| 1.7832 | 3.7573 | 12 |
| 1.7660 | 3.7414 | 13 |
| 1.7493 | 3.7810 | 14 |
| 1.7330 | 3.8443 | 15 |
| 1.7175 | 3.8305 | 16 |
| 1.7029 | 3.8547 | 17 |
| 1.6887 | 3.8189 | 18 |
| 1.6753 | 3.8725 | 19 |
| 1.6622 | 3.9050 | 20 |
| 1.6498 | 3.9306 | 21 |
| 1.6376 | 3.9670 | 22 |
| 1.6262 | 3.9569 | 23 |
| 1.6150 | 3.9473 | 24 |
| 1.6044 | 3.9695 | 25 |
| 1.5943 | 3.9193 | 26 |
| 1.5844 | 3.9739 | 27 |
| 1.5751 | 4.0273 | 28 |
| 1.5660 | 4.0224 | 29 |
| 1.5574 | 4.0163 | 30 |
| 1.5491 | 4.0466 | 31 |
| 1.5413 | 4.0520 | 32 |
| 1.5342 | 4.0640 | 33 |
| 1.5270 | 4.0616 | 34 |
| 1.5199 | 4.0611 | 35 |
| 1.5133 | 4.0884 | 36 |
| 1.5073 | 4.0827 | 37 |
| 1.5015 | 4.0972 | 38 |
| 1.4962 | 4.0991 | 39 |
| 1.4908 | 4.0989 | 40 |
| 1.4858 | 4.1078 | 41 |
| 1.4814 | 4.1295 | 42 |
| 1.4773 | 4.1142 | 43 |
| 1.4730 | 4.1200 | 44 |
| 1.4699 | 4.1270 | 45 |
| 1.4664 | 4.1425 | 46 |
| 1.4637 | 4.1392 | 47 |
| 1.4612 | 4.1365 | 48 |
| 1.4591 | 4.1433 | 49 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Blerrrry/Kkk | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: other
tags:
- image-captioning
inference: false
languages:
- en
license: bsd-3-clause
datasets:
- ybelkada/football-dataset
---
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone) - and fine-tuned on
[football dataset](https://huggingface.co/datasets/ybelkada/football-dataset).
Google Colab notebook for fine-tuning: https://colab.research.google.com/drive/1lbqiSiA0sDF7JDWPeS0tccrM85LloVha?usp=sharing
|  |
|:--:|
| <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>|
## TL;DR
Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:
*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.*
## Usage
You can use this model for conditional and unconditional image captioning.
### Using the Pytorch model
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("ybelkada/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("ybelkada/blip-image-captioning-base")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16).to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>
## BibTex and citation info
```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
doi = {10.48550/ARXIV.2201.12086},
url = {https://arxiv.org/abs/2201.12086},
author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
Branex/gpt-neo-2.7B | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- art
- stable-diffusion
- Automatic1111
- .ckpt
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: false
---
Celeste is a general-purpose Stable Diffusion illustration model. She seems to perform well even with a small number of prompts.

As it can be seen in the pictures below, she has been tested on a diverse variety of prompts, delivering quite good and fairly artistic results.
Illustration Examples:


Abstract Examples:


Anime Examples:


The example images were generated with Celeste.ckpt at a low resolution and later upscaled with an upscaling algorithm, since my GPU does not support resolutions above 700x700.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
BrianTin/MTBERT | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn ./config/ppo/SnowballTarget.yaml --run-id="SnowballTarget-v1" --resume
```
### Training hyperparameters
```yaml
behaviors:
SnowballTarget:
trainer_type: ppo
summary_freq: 10000
keep_checkpoints: 5
checkpoint_interval: 50000
max_steps: 900000
time_horizon: 128
threaded: true
hyperparameters:
learning_rate: 0.0001
learning_rate_schedule: linear
batch_size: 128
buffer_size: 4096
beta: 0.005
epsilon: 0.2
lambd: 0.95
num_epoch: 5
network_settings:
normalize: false
hidden_units: 256
num_layers: 3
vis_encode_type: simple
reward_signals:
extrinsic:
gamma: 0.99
strength: 1.0
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: kinkpunk/PPO-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Broadus20/DialoGPT-small-joshua | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.89
- name: F1
type: f1
value: 0.8896321070234113
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3255
- Accuracy: 0.89
- F1: 0.8896
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Brokette/projetCS | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | # Kanade!
training:
- 250 images on yoisaki kanade, with wd1.4+booru tags, merged with other models
- 786 ARB; EMA; fp32; clip2
- 2e-6 CosineAnnealing
- augmentations: brightness/contrast/crop/flip
tested on: **clip1**, DDIM, 25 steps, 448x512 with 2x latent hires
keyword: `yoisaki kanade, 25-ji night code de. \(project sekai\)`
files:
- `knd_sd_e19_ema.ckpt`: crude DreamBooth file epoch 19, using evt-v4 base
- `knd_sd_e29_ema.ckpt`: same thing but epoch 29
- `KNDiffusion_fp32_no_vae.safetensors`: tuned model that slightly resembles kanade
- (KNDiffusion = phfa_knd29_evt4_030)
samples:
[image1](https://huggingface.co/trojblue/KNDiffusion/resolve/main/samples/00168-773909389-DDIM-step25-cfg6.5-phfa_knd29_evt4_030-fbf412b2-20230117_101156_902795.png)
```
yoisaki kanade, 25-ji night code de. \(project sekai\), 1girl, close-up, solo, long hair, headphones, blue eyes, jacket, looking at viewer, hair between eyes, shirt, long sleeves, blue jacket, collarbone, bangs, chair, sitting, track jacket, black shirt, grey jacket, grey shirt, indoors, open clothes, open jacket, open mouth, straight hair, upper body, very long hair, white hair, project sekai, highres
```
[image2](https://huggingface.co/trojblue/KNDiffusion/resolve/main/samples/00167-3301161699-DDIM-step25-cfg6.5-phfa_knd29_evt4_030-fbf412b2-20230117_100910_391039.png)
```
yoisaki kanade, 25-ji night code de. (project sekai), 1girl, solo, long hair, blue eyes, jacket, sleeves past wrists, very long hair, collarbone, white background, bangs, blush, blue jacket, hair between eyes, long sleeves, looking at viewer, sleeves past fingers, simple background, parted lips, open jacket, black shirt, shirt, open clothes, :o, cowboy shot, grey hair, hand up, o, project sekai, highres
```
sample configs:
```
Negative prompt: nsfw, text, error, signature, watermark, username, realistic,3d,(large breast), multiple people, animals, lowres, cropped, worth quality ,low quality, normal quality, jpeg artifacts, blurry, bad anatomy, bad hands, bad arms, bad feet, bad anatomy, missing fingers, extra digits, fewer digits, long neck, missing legs, huge person, optical_illusion
Steps: 25, Sampler: DDIM, CFG scale: 6.5, Seed: 773909389, Size: 448x512, Model: KNDiffusion_fp32_no_vae, Denoising strength: 0.7, ENSD: 31338, Hires upscale: 2, Hires upscaler: Latent (bicubic)
``` |
Brona/poc_de | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: jrauch4/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Bryan190/Aguy190 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-17T12:53:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-medium-finetuned-on-fleurs-ln_cd1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-finetuned-on-fleurs-ln_cd1
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the "ln_cd" (Lingala) subset of the google/fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4483
- Wer: 14.7079
## Model description
More information needed
## Intended uses & limitations
More information needed
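A minimal transcription sketch with the 🤗 `pipeline` API (the repo id and audio path below are assumptions; substitute this checkpoint's actual Hub path):
```python
from transformers import pipeline

# Hypothetical repo id; replace with this checkpoint's actual Hub path
asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/whisper-medium-finetuned-on-fleurs-ln_cd1",
)
print(asr("lingala_sample.wav")["text"])
```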
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0528 | 4.78 | 1000 | 0.3612 | 17.4812 |
| 0.0013 | 9.57 | 2000 | 0.4214 | 15.7308 |
| 0.0003 | 14.35 | 3000 | 0.4423 | 14.8670 |
| 0.0002 | 19.14 | 4000 | 0.4483 | 14.7079 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.8.0
- Tokenizers 0.13.2
|
CALM/backup | [
"lean_albert",
"transformers"
]
| null | {
"architectures": [
"LeanAlbertForPretraining",
"LeanAlbertForTokenClassification",
"LeanAlbertForSequenceClassification"
],
"model_type": "lean_albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-01-17T13:25:17Z | ---
language:
- gos
---
A Gronings Wav2Vec2 model, created by fine-tuning a multilingual XLS-R model that was [further pre-trained on Gronings speech](https://huggingface.co/bartelds/wav2vec2-xls-r-300m-gos).
This model is part of the paper *Making More of Little Data: Improving Low-Resource Automatic Speech Recognition Using Data Augmentation*.
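A minimal transcription sketch with 🤗 Transformers (the repo id and audio file below are assumptions; substitute this model's actual Hub path):
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Hypothetical repo id; replace with this model's actual Hub path
processor = Wav2Vec2Processor.from_pretrained("<user>/wav2vec2-xls-r-300m-gos-ft")
model = Wav2Vec2ForCTC.from_pretrained("<user>/wav2vec2-xls-r-300m-gos-ft")

speech, sample_rate = sf.read("gronings_sample.wav")  # 16 kHz mono audio expected
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(logits.argmax(dim=-1))[0])
```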
More information on [GitHub](https://github.com/Bartelds/asr-augmentation). |
CAMeL-Lab/bert-base-arabic-camelbert-ca-ner | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 85 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -605.09 +/- 85.59
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are assumptions; point them at this model's actual upload):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical repo id and filename; adjust to this model's actual upload
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16,451 | 2023-01-17T13:30:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: tst-summarization
results:
- task:
name: Summarization
type: summarization
dataset:
name: samsum
type: samsum
config: samsum
split: train
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 44.9509
---
# tst-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6972
- Rouge1: 44.9509
- Rouge2: 21.7162
- Rougel: 37.7582
- Rougelsum: 41.7239
- Gen Len: 22.7714
## Model description
More information needed
## Intended uses & limitations
More information needed
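A minimal inference sketch (the repo id below is an assumption; substitute this checkpoint's actual Hub path):
```python
from transformers import pipeline

# Hypothetical repo id; replace with this checkpoint's actual Hub path
summarizer = pipeline("summarization", model="<user>/tst-summarization")
dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure!\nAmanda: I'll bring you some tomorrow :-)"
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```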
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cpu
- Datasets 2.8.0
- Tokenizers 0.13.2
|
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | null | ---
language:
- gos
---
A Gronings Wav2Vec2 model, created by fine-tuning a multilingual XLS-R model that was [further pre-trained on Gronings speech](https://huggingface.co/bartelds/wav2vec2-xls-r-300m-gos).
This model is part of the paper *Making More of Little Data: Improving Low-Resource Automatic Speech Recognition Using Data Augmentation*.
More information on [GitHub](https://github.com/Bartelds/asr-augmentation). |
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 651.50 +/- 351.88
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga css919 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga css919 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga css919
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 449 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### sfjssoiproto Dreambooth model trained by tytfyhutrf with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
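A minimal 🧨 Diffusers sketch (the repo id is an assumption, and `sfjssoiproto` is taken from the title as the concept's instance token):
```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical repo id; replace with this concept's actual Hub path
pipe = StableDiffusionPipeline.from_pretrained(
    "tytfyhutrf/sfjssoiproto", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sfjssoiproto").images[0]
image.save("sfjssoiproto.png")
```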
Sample pictures of this concept:
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 62 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn ./config/ppo/PyramidsRND.yaml --run-id="Pyramids-Training" --resume
```
### Training hyperparameters
```yaml
behaviors:
Pyramids:
trainer_type: ppo
hyperparameters:
batch_size: 128
buffer_size: 4096
learning_rate: 0.0003
beta: 0.01
epsilon: 0.2
lambd: 0.95
num_epoch: 3
learning_rate_schedule: linear
network_settings:
normalize: false
hidden_units: 512
num_layers: 3
vis_encode_type: simple
reward_signals:
extrinsic:
gamma: 0.99
strength: 1.0
rnd:
gamma: 0.99
strength: 0.01
network_settings:
hidden_units: 128
num_layers: 3
learning_rate: 0.0001
keep_checkpoints: 5
max_steps: 900000
time_horizon: 256
summary_freq: 30000
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: kinkpunk/PPO-PyramidsRND
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,862 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook
# (it downloads and unpickles the Q-table dictionary from the Hub)
model = load_from_hub(repo_id="psalmodieur/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"], is_slippery=False)
```
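A short greedy-evaluation sketch (the `"qtable"` key is an assumption based on the course's upload format):
```python
import numpy as np

state = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily from the Q-table
    state, reward, done, info = env.step(action)
```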
|
CL/safe-math-bot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-17T14:57:49Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook
# (it downloads and unpickles the Q-table dictionary from the Hub)
model = load_from_hub(repo_id="SatishBethi/Q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"], is_slippery=False)
```
|
CLAck/en-km | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"translation",
"autotrain_compatible"
]
| translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2023-01-17T15:00:13Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: modelv3_WS_CV0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelv3_WS_CV0
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3031
- Ame: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 25}
- Anguage: {'precision': 0.7777777777777778, 'recall': 0.8, 'f1': 0.7887323943661971, 'number': 35}
- Du Degree: {'precision': 0.8428571428571429, 'recall': 0.9076923076923077, 'f1': 0.8740740740740741, 'number': 65}
- Du End Date: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2}
- Du University: {'precision': 0.8421052631578947, 'recall': 0.8421052631578947, 'f1': 0.8421052631578947, 'number': 57}
- Ears Ex: {'precision': 0.7407407407407407, 'recall': 0.8333333333333334, 'f1': 0.7843137254901961, 'number': 24}
- Er Name: {'precision': 0.14285714285714285, 'recall': 0.5, 'f1': 0.22222222222222224, 'number': 2}
- Kill: {'precision': 0.9552715654952076, 'recall': 0.861671469740634, 'f1': 0.9060606060606061, 'number': 347}
- Ractice: {'precision': 0.5714285714285714, 'recall': 0.7741935483870968, 'f1': 0.6575342465753424, 'number': 31}
- Rade: {'precision': 0.7741935483870968, 'recall': 0.7741935483870968, 'f1': 0.7741935483870968, 'number': 31}
- Ummarize: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 398}
- Xpertise: {'precision': 0.9924812030075187, 'recall': 0.9705882352941176, 'f1': 0.9814126394052045, 'number': 136}
- X Company: {'precision': 0.9247311827956989, 'recall': 0.945054945054945, 'f1': 0.9347826086956522, 'number': 182}
- X Description: {'precision': 0.9616087751371115, 'recall': 0.9763988332007425, 'f1': 0.9689473684210527, 'number': 3771}
- X End Date: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1}
- X Location: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5}
- X Position: {'precision': 0.8184615384615385, 'recall': 0.7430167597765364, 'f1': 0.7789165446559297, 'number': 358}
- X Start Date: {'precision': 0.75, 'recall': 1.0, 'f1': 0.8571428571428571, 'number': 6}
- Overall Precision: 0.9424
- Overall Recall: 0.9467
- Overall F1: 0.9445
- Overall Accuracy: 0.9357
## Model description
More information needed
## Intended uses & limitations
More information needed
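A minimal inference sketch (the repo id and image path below are assumptions; `apply_ocr=True` additionally requires Tesseract to be installed):
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# Hypothetical repo id; replace with this checkpoint's actual Hub path
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("<user>/modelv3_WS_CV0")

image = Image.open("resume_page.png").convert("RGB")
encoding = processor(image, return_tensors="pt", truncation=True)
predictions = model(**encoding).logits.argmax(-1)  # one label id per token
```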
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Ame | Anguage | Du Degree | Du End Date | Du University | Ears Ex | Er Name | Kill | Ractice | Rade | Ummarize | Xpertise | X Company | X Description | X End Date | X Location | X Position | X Start Date | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------:|:---------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.2842 | 1.0 | 54 | 0.7912 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 25} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 35} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 57} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.9338235294117647, 'recall': 0.3659942363112392, 'f1': 0.525879917184265, 'number': 347} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 31} | {'precision': 0.7397003745318352, 'recall': 0.992462311557789, 'f1': 0.8476394849785408, 'number': 398} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 136} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 182} | {'precision': 0.8735822306238186, 'recall': 0.9803765579421904, 'f1': 0.9239035361739347, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.478134110787172, 'recall': 0.4581005586592179, 'f1': 0.4679029957203994, 'number': 358} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | 0.8357 | 0.8004 | 0.8176 | 0.7817 |
| 0.6213 | 2.0 | 108 | 0.5310 | {'precision': 0.1111111111111111, 'recall': 0.12, 'f1': 0.11538461538461538, 'number': 25} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 35} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.3391304347826087, 'recall': 0.6842105263157895, 'f1': 0.4534883720930233, 'number': 57} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.9134328358208955, 'recall': 0.8818443804034583, 'f1': 0.8973607038123167, 'number': 347} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 31} | {'precision': 0.989769820971867, 'recall': 0.9723618090452262, 'f1': 0.9809885931558936, 'number': 398} | {'precision': 0.8723404255319149, 'recall': 0.3014705882352941, 'f1': 0.44808743169398907, 'number': 136} | {'precision': 0.5555555555555556, 'recall': 0.24725274725274726, 'f1': 0.3422053231939164, 'number': 182} | {'precision': 0.9261983261476033, 'recall': 0.9684433837178468, 'f1': 0.9468498833290122, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.5303370786516854, 'recall': 0.659217877094972, 'f1': 0.5877957658779577, 'number': 358} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | 0.8738 | 0.8599 | 0.8668 | 0.8500 |
| 0.4583 | 3.0 | 162 | 0.4043 | {'precision': 0.16666666666666666, 'recall': 0.16, 'f1': 0.16326530612244897, 'number': 25} | {'precision': 0.5625, 'recall': 0.5142857142857142, 'f1': 0.5373134328358209, 'number': 35} | {'precision': 1.0, 'recall': 0.1076923076923077, 'f1': 0.19444444444444445, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.40860215053763443, 'recall': 0.6666666666666666, 'f1': 0.5066666666666667, 'number': 57} | {'precision': 0.8666666666666667, 'recall': 0.5416666666666666, 'f1': 0.6666666666666667, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.9164179104477612, 'recall': 0.8847262247838616, 'f1': 0.9002932551319648, 'number': 347} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 31} | {'precision': 0.9875930521091811, 'recall': 1.0, 'f1': 0.9937578027465668, 'number': 398} | {'precision': 0.9333333333333333, 'recall': 0.4117647058823529, 'f1': 0.5714285714285713, 'number': 136} | {'precision': 0.6437246963562753, 'recall': 0.8736263736263736, 'f1': 0.7412587412587412, 'number': 182} | {'precision': 0.9511426319936959, 'recall': 0.960222752585521, 'f1': 0.9556611243072051, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.6728110599078341, 'recall': 0.8156424581005587, 'f1': 0.7373737373737372, 'number': 358} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | 0.9003 | 0.8972 | 0.8987 | 0.8890 |
| 0.3312 | 4.0 | 216 | 0.3097 | {'precision': 0.2962962962962963, 'recall': 0.32, 'f1': 0.30769230769230765, 'number': 25} | {'precision': 0.575, 'recall': 0.6571428571428571, 'f1': 0.6133333333333333, 'number': 35} | {'precision': 0.5092592592592593, 'recall': 0.8461538461538461, 'f1': 0.6358381502890174, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.7142857142857143, 'recall': 0.2631578947368421, 'f1': 0.3846153846153846, 'number': 57} | {'precision': 0.8125, 'recall': 0.5416666666666666, 'f1': 0.65, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.9591836734693877, 'recall': 0.8126801152737753, 'f1': 0.8798751950078003, 'number': 347} | {'precision': 0.36666666666666664, 'recall': 0.7096774193548387, 'f1': 0.48351648351648346, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 31} | {'precision': 0.9707317073170731, 'recall': 1.0, 'f1': 0.9851485148514851, 'number': 398} | {'precision': 0.825, 'recall': 0.7279411764705882, 'f1': 0.7734375, 'number': 136} | {'precision': 0.7589285714285714, 'recall': 0.9340659340659341, 'f1': 0.8374384236453202, 'number': 182} | {'precision': 0.9671122994652407, 'recall': 0.9591620259878016, 'f1': 0.9631207562242046, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.7210401891252955, 'recall': 0.8519553072625698, 'f1': 0.7810499359795134, 'number': 358} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | 0.9132 | 0.9144 | 0.9138 | 0.9069 |
| 0.266 | 5.0 | 270 | 0.3171 | {'precision': 0.7692307692307693, 'recall': 0.8, 'f1': 0.7843137254901961, 'number': 25} | {'precision': 0.6052631578947368, 'recall': 0.6571428571428571, 'f1': 0.6301369863013698, 'number': 35} | {'precision': 0.5384615384615384, 'recall': 0.2153846153846154, 'f1': 0.3076923076923077, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.45689655172413796, 'recall': 0.9298245614035088, 'f1': 0.6127167630057804, 'number': 57} | {'precision': 0.6842105263157895, 'recall': 0.5416666666666666, 'f1': 0.6046511627906976, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.9713375796178344, 'recall': 0.8789625360230547, 'f1': 0.9228441754916793, 'number': 347} | {'precision': 0.48717948717948717, 'recall': 0.6129032258064516, 'f1': 0.5428571428571428, 'number': 31} | {'precision': 0.8333333333333334, 'recall': 0.3225806451612903, 'f1': 0.4651162790697674, 'number': 31} | {'precision': 0.9851485148514851, 'recall': 1.0, 'f1': 0.9925187032418953, 'number': 398} | {'precision': 0.8424657534246576, 'recall': 0.9044117647058824, 'f1': 0.8723404255319149, 'number': 136} | {'precision': 0.8870967741935484, 'recall': 0.9065934065934066, 'f1': 0.8967391304347826, 'number': 182} | {'precision': 0.9295460183577277, 'recall': 0.9936356404136834, 'f1': 0.9605229428351705, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.9631901840490797, 'recall': 0.43854748603351956, 'f1': 0.6026871401151631, 'number': 358} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | 0.9143 | 0.9217 | 0.9180 | 0.9106 |
| 0.2304 | 6.0 | 324 | 0.2539 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 25} | {'precision': 0.38636363636363635, 'recall': 0.4857142857142857, 'f1': 0.4303797468354431, 'number': 35} | {'precision': 0.6354166666666666, 'recall': 0.9384615384615385, 'f1': 0.7577639751552795, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.717391304347826, 'recall': 0.5789473684210527, 'f1': 0.6407766990291262, 'number': 57} | {'precision': 0.625, 'recall': 0.625, 'f1': 0.625, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.9020172910662824, 'recall': 0.9020172910662824, 'f1': 0.9020172910662824, 'number': 347} | {'precision': 0.46, 'recall': 0.7419354838709677, 'f1': 0.5679012345679013, 'number': 31} | {'precision': 0.782608695652174, 'recall': 0.5806451612903226, 'f1': 0.6666666666666667, 'number': 31} | {'precision': 0.9875930521091811, 'recall': 1.0, 'f1': 0.9937578027465668, 'number': 398} | {'precision': 0.8881118881118881, 'recall': 0.9338235294117647, 'f1': 0.910394265232975, 'number': 136} | {'precision': 0.8054298642533937, 'recall': 0.978021978021978, 'f1': 0.8833746898263027, 'number': 182} | {'precision': 0.9553043924993578, 'recall': 0.9862105542296473, 'f1': 0.9705114822546973, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.8798701298701299, 'recall': 0.7569832402234636, 'f1': 0.8138138138138139, 'number': 358} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | 0.9244 | 0.9492 | 0.9367 | 0.9245 |
| 0.1939 | 7.0 | 378 | 0.2805 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 25} | {'precision': 0.7777777777777778, 'recall': 0.8, 'f1': 0.7887323943661971, 'number': 35} | {'precision': 0.6354166666666666, 'recall': 0.9384615384615385, 'f1': 0.7577639751552795, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.75, 'recall': 0.631578947368421, 'f1': 0.6857142857142857, 'number': 57} | {'precision': 0.8148148148148148, 'recall': 0.9166666666666666, 'f1': 0.8627450980392156, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.976027397260274, 'recall': 0.8213256484149856, 'f1': 0.892018779342723, 'number': 347} | {'precision': 0.37142857142857144, 'recall': 0.8387096774193549, 'f1': 0.5148514851485149, 'number': 31} | {'precision': 0.6923076923076923, 'recall': 0.5806451612903226, 'f1': 0.631578947368421, 'number': 31} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 398} | {'precision': 0.9, 'recall': 0.7941176470588235, 'f1': 0.84375, 'number': 136} | {'precision': 0.9015544041450777, 'recall': 0.9560439560439561, 'f1': 0.928, 'number': 182} | {'precision': 0.9773828756058158, 'recall': 0.9626093874303898, 'f1': 0.9699398797595191, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.7713004484304933, 'recall': 0.9608938547486033, 'f1': 0.8557213930348258, 'number': 358} | {'precision': 1.0, 'recall': 0.16666666666666666, 'f1': 0.2857142857142857, 'number': 6} | 0.9386 | 0.9416 | 0.9401 | 0.9318 |
| 0.1685 | 8.0 | 432 | 0.2443 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 25} | {'precision': 0.6666666666666666, 'recall': 0.7428571428571429, 'f1': 0.7027027027027027, 'number': 35} | {'precision': 0.6590909090909091, 'recall': 0.8923076923076924, 'f1': 0.7581699346405228, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.85, 'recall': 0.5964912280701754, 'f1': 0.7010309278350515, 'number': 57} | {'precision': 0.8333333333333334, 'recall': 0.8333333333333334, 'f1': 0.8333333333333334, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.9405940594059405, 'recall': 0.8213256484149856, 'f1': 0.8769230769230769, 'number': 347} | {'precision': 0.43103448275862066, 'recall': 0.8064516129032258, 'f1': 0.5617977528089887, 'number': 31} | {'precision': 0.75, 'recall': 0.5806451612903226, 'f1': 0.6545454545454547, 'number': 31} | {'precision': 0.9875930521091811, 'recall': 1.0, 'f1': 0.9937578027465668, 'number': 398} | {'precision': 0.8837209302325582, 'recall': 0.8382352941176471, 'f1': 0.8603773584905661, 'number': 136} | {'precision': 0.921875, 'recall': 0.9725274725274725, 'f1': 0.9465240641711229, 'number': 182} | {'precision': 0.9568921011874032, 'recall': 0.983028374436489, 'f1': 0.9697841726618706, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.8523489932885906, 'recall': 0.7094972067039106, 'f1': 0.7743902439024389, 'number': 358} | {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 6} | 0.9346 | 0.9399 | 0.9373 | 0.9321 |
| 0.1354 | 9.0 | 486 | 0.2301 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 25} | {'precision': 0.6944444444444444, 'recall': 0.7142857142857143, 'f1': 0.7042253521126761, 'number': 35} | {'precision': 0.8732394366197183, 'recall': 0.9538461538461539, 'f1': 0.9117647058823529, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.8305084745762712, 'recall': 0.8596491228070176, 'f1': 0.8448275862068966, 'number': 57} | {'precision': 0.6923076923076923, 'recall': 0.75, 'f1': 0.7199999999999999, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.8979591836734694, 'recall': 0.8876080691642652, 'f1': 0.8927536231884057, 'number': 347} | {'precision': 0.4528301886792453, 'recall': 0.7741935483870968, 'f1': 0.5714285714285714, 'number': 31} | {'precision': 0.75, 'recall': 0.5806451612903226, 'f1': 0.6545454545454547, 'number': 31} | {'precision': 0.995, 'recall': 1.0, 'f1': 0.9974937343358395, 'number': 398} | {'precision': 0.9357142857142857, 'recall': 0.9632352941176471, 'f1': 0.9492753623188407, 'number': 136} | {'precision': 0.9308510638297872, 'recall': 0.9615384615384616, 'f1': 0.9459459459459459, 'number': 182} | {'precision': 0.9598435462842243, 'recall': 0.9761336515513126, 'f1': 0.9679200631080727, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.7861271676300579, 'recall': 0.7597765363128491, 'f1': 0.7727272727272726, 'number': 358} | {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 6} | 0.9340 | 0.9481 | 0.9410 | 0.9329 |
| 0.1156 | 10.0 | 540 | 0.2534 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 25} | {'precision': 0.7837837837837838, 'recall': 0.8285714285714286, 'f1': 0.8055555555555555, 'number': 35} | {'precision': 0.8571428571428571, 'recall': 0.9230769230769231, 'f1': 0.888888888888889, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.8448275862068966, 'recall': 0.8596491228070176, 'f1': 0.8521739130434783, 'number': 57} | {'precision': 0.84, 'recall': 0.875, 'f1': 0.8571428571428572, 'number': 24} | {'precision': 0.09090909090909091, 'recall': 0.5, 'f1': 0.15384615384615385, 'number': 2} | {'precision': 0.9777070063694268, 'recall': 0.8847262247838616, 'f1': 0.9288956127080182, 'number': 347} | {'precision': 0.43333333333333335, 'recall': 0.8387096774193549, 'f1': 0.5714285714285715, 'number': 31} | {'precision': 0.6571428571428571, 'recall': 0.7419354838709677, 'f1': 0.6969696969696969, 'number': 31} | {'precision': 1.0, 'recall': 0.9974874371859297, 'f1': 0.9987421383647799, 'number': 398} | {'precision': 0.9923664122137404, 'recall': 0.9558823529411765, 'f1': 0.9737827715355806, 'number': 136} | {'precision': 0.8682926829268293, 'recall': 0.978021978021978, 'f1': 0.9198966408268734, 'number': 182} | {'precision': 0.968503937007874, 'recall': 0.9785202863961814, 'f1': 0.9734863474475662, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.8407079646017699, 'recall': 0.7960893854748603, 'f1': 0.8177905308464849, 'number': 358} | {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 6} | 0.9462 | 0.9545 | 0.9504 | 0.9424 |
| 0.1287 | 11.0 | 594 | 0.2975 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 25} | {'precision': 0.7368421052631579, 'recall': 0.8, 'f1': 0.7671232876712328, 'number': 35} | {'precision': 0.8823529411764706, 'recall': 0.9230769230769231, 'f1': 0.9022556390977443, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.8771929824561403, 'recall': 0.8771929824561403, 'f1': 0.8771929824561403, 'number': 57} | {'precision': 0.7407407407407407, 'recall': 0.8333333333333334, 'f1': 0.7843137254901961, 'number': 24} | {'precision': 0.125, 'recall': 0.5, 'f1': 0.2, 'number': 2} | {'precision': 1.0, 'recall': 0.8097982708933718, 'f1': 0.8949044585987261, 'number': 347} | {'precision': 0.5116279069767442, 'recall': 0.7096774193548387, 'f1': 0.5945945945945946, 'number': 31} | {'precision': 0.75, 'recall': 0.5806451612903226, 'f1': 0.6545454545454547, 'number': 31} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 398} | {'precision': 0.9776119402985075, 'recall': 0.9632352941176471, 'f1': 0.9703703703703703, 'number': 136} | {'precision': 0.917098445595855, 'recall': 0.9725274725274725, 'f1': 0.9440000000000001, 'number': 182} | {'precision': 0.95190329218107, 'recall': 0.9814372845399099, 'f1': 0.9664447055751405, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.88671875, 'recall': 0.6340782122905028, 'f1': 0.7394136807817591, 'number': 358} | {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 6} | 0.9446 | 0.9396 | 0.9420 | 0.9361 |
| 0.0997 | 12.0 | 648 | 0.2766 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 25} | {'precision': 0.6052631578947368, 'recall': 0.6571428571428571, 'f1': 0.6301369863013698, 'number': 35} | {'precision': 0.8571428571428571, 'recall': 0.9230769230769231, 'f1': 0.888888888888889, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.8596491228070176, 'recall': 0.8596491228070176, 'f1': 0.8596491228070176, 'number': 57} | {'precision': 0.75, 'recall': 0.875, 'f1': 0.8076923076923077, 'number': 24} | {'precision': 0.1111111111111111, 'recall': 0.5, 'f1': 0.1818181818181818, 'number': 2} | {'precision': 0.9355828220858896, 'recall': 0.8789625360230547, 'f1': 0.9063893016344725, 'number': 347} | {'precision': 0.45614035087719296, 'recall': 0.8387096774193549, 'f1': 0.5909090909090909, 'number': 31} | {'precision': 0.7666666666666667, 'recall': 0.7419354838709677, 'f1': 0.7540983606557377, 'number': 31} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 398} | {'precision': 0.9924812030075187, 'recall': 0.9705882352941176, 'f1': 0.9814126394052045, 'number': 136} | {'precision': 0.9202127659574468, 'recall': 0.9505494505494505, 'f1': 0.9351351351351351, 'number': 182} | {'precision': 0.9635839664658108, 'recall': 0.9753381066030231, 'f1': 0.9694254085397996, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.825, 'recall': 0.7374301675977654, 'f1': 0.7787610619469025, 'number': 358} | {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 6} | 0.9420 | 0.9467 | 0.9443 | 0.9361 |
| 0.0772 | 13.0 | 702 | 0.2778 | {'precision': 0.9615384615384616, 'recall': 1.0, 'f1': 0.9803921568627451, 'number': 25} | {'precision': 0.7105263157894737, 'recall': 0.7714285714285715, 'f1': 0.7397260273972601, 'number': 35} | {'precision': 0.890625, 'recall': 0.8769230769230769, 'f1': 0.883720930232558, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.796875, 'recall': 0.8947368421052632, 'f1': 0.8429752066115702, 'number': 57} | {'precision': 0.7777777777777778, 'recall': 0.875, 'f1': 0.823529411764706, 'number': 24} | {'precision': 0.125, 'recall': 0.5, 'f1': 0.2, 'number': 2} | {'precision': 0.964968152866242, 'recall': 0.8731988472622478, 'f1': 0.9167927382753402, 'number': 347} | {'precision': 0.4791666666666667, 'recall': 0.7419354838709677, 'f1': 0.5822784810126582, 'number': 31} | {'precision': 0.7, 'recall': 0.6774193548387096, 'f1': 0.6885245901639343, 'number': 31} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 398} | {'precision': 0.9924812030075187, 'recall': 0.9705882352941176, 'f1': 0.9814126394052045, 'number': 136} | {'precision': 0.921875, 'recall': 0.9725274725274725, 'f1': 0.9465240641711229, 'number': 182} | {'precision': 0.96700706991359, 'recall': 0.979315831344471, 'f1': 0.9731225296442688, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.8466076696165191, 'recall': 0.8016759776536313, 'f1': 0.8235294117647058, 'number': 358} | {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 6} | 0.9470 | 0.9536 | 0.9503 | 0.9422 |
| 0.0659 | 14.0 | 756 | 0.3008 | {'precision': 0.9615384615384616, 'recall': 1.0, 'f1': 0.9803921568627451, 'number': 25} | {'precision': 0.7027027027027027, 'recall': 0.7428571428571429, 'f1': 0.7222222222222223, 'number': 35} | {'precision': 0.8714285714285714, 'recall': 0.9384615384615385, 'f1': 0.9037037037037037, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.7936507936507936, 'recall': 0.8771929824561403, 'f1': 0.8333333333333334, 'number': 57} | {'precision': 0.75, 'recall': 0.875, 'f1': 0.8076923076923077, 'number': 24} | {'precision': 0.1, 'recall': 0.5, 'f1': 0.16666666666666669, 'number': 2} | {'precision': 0.9652777777777778, 'recall': 0.8011527377521613, 'f1': 0.8755905511811023, 'number': 347} | {'precision': 0.4727272727272727, 'recall': 0.8387096774193549, 'f1': 0.6046511627906976, 'number': 31} | {'precision': 0.6774193548387096, 'recall': 0.6774193548387096, 'f1': 0.6774193548387096, 'number': 31} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 398} | {'precision': 0.9041095890410958, 'recall': 0.9705882352941176, 'f1': 0.9361702127659575, 'number': 136} | {'precision': 0.9206349206349206, 'recall': 0.9560439560439561, 'f1': 0.9380053908355795, 'number': 182} | {'precision': 0.9526152252718798, 'recall': 0.9756032882524529, 'f1': 0.9639722258613914, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.8231292517006803, 'recall': 0.6759776536312849, 'f1': 0.7423312883435583, 'number': 358} | {'precision': 0.75, 'recall': 1.0, 'f1': 0.8571428571428571, 'number': 6} | 0.9323 | 0.9386 | 0.9355 | 0.9294 |
| 0.0541 | 15.0 | 810 | 0.3076 | {'precision': 0.9615384615384616, 'recall': 1.0, 'f1': 0.9803921568627451, 'number': 25} | {'precision': 0.6666666666666666, 'recall': 0.7428571428571429, 'f1': 0.7027027027027027, 'number': 35} | {'precision': 0.8787878787878788, 'recall': 0.8923076923076924, 'f1': 0.8854961832061069, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.819672131147541, 'recall': 0.8771929824561403, 'f1': 0.8474576271186439, 'number': 57} | {'precision': 0.7777777777777778, 'recall': 0.875, 'f1': 0.823529411764706, 'number': 24} | {'precision': 0.125, 'recall': 0.5, 'f1': 0.2, 'number': 2} | {'precision': 0.9966996699669967, 'recall': 0.8703170028818443, 'f1': 0.9292307692307691, 'number': 347} | {'precision': 0.52, 'recall': 0.8387096774193549, 'f1': 0.6419753086419753, 'number': 31} | {'precision': 0.7407407407407407, 'recall': 0.6451612903225806, 'f1': 0.689655172413793, 'number': 31} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 398} | {'precision': 0.9777777777777777, 'recall': 0.9705882352941176, 'f1': 0.974169741697417, 'number': 136} | {'precision': 0.9259259259259259, 'recall': 0.9615384615384616, 'f1': 0.9433962264150944, 'number': 182} | {'precision': 0.9615485221030604, 'recall': 0.9748077433041633, 'f1': 0.9681327363708191, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.8403908794788274, 'recall': 0.7206703910614525, 'f1': 0.77593984962406, 'number': 358} | {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 6} | 0.9452 | 0.9449 | 0.9450 | 0.9375 |
| 0.0518 | 16.0 | 864 | 0.3259 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 25} | {'precision': 0.7027027027027027, 'recall': 0.7428571428571429, 'f1': 0.7222222222222223, 'number': 35} | {'precision': 0.8450704225352113, 'recall': 0.9230769230769231, 'f1': 0.8823529411764706, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.8571428571428571, 'recall': 0.8421052631578947, 'f1': 0.8495575221238938, 'number': 57} | {'precision': 0.7407407407407407, 'recall': 0.8333333333333334, 'f1': 0.7843137254901961, 'number': 24} | {'precision': 0.125, 'recall': 0.5, 'f1': 0.2, 'number': 2} | {'precision': 0.9451219512195121, 'recall': 0.8933717579250721, 'f1': 0.9185185185185186, 'number': 347} | {'precision': 0.5416666666666666, 'recall': 0.8387096774193549, 'f1': 0.6582278481012658, 'number': 31} | {'precision': 0.6756756756756757, 'recall': 0.8064516129032258, 'f1': 0.7352941176470588, 'number': 31} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 398} | {'precision': 0.7764705882352941, 'recall': 0.9705882352941176, 'f1': 0.8627450980392157, 'number': 136} | {'precision': 0.927461139896373, 'recall': 0.9835164835164835, 'f1': 0.9546666666666668, 'number': 182} | {'precision': 0.9572178477690289, 'recall': 0.9671174754706974, 'f1': 0.9621421975992612, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.8391608391608392, 'recall': 0.6703910614525139, 'f1': 0.7453416149068324, 'number': 358} | {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 6} | 0.9325 | 0.9392 | 0.9359 | 0.9272 |
| 0.0439 | 17.0 | 918 | 0.3117 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 25} | {'precision': 0.6842105263157895, 'recall': 0.7428571428571429, 'f1': 0.7123287671232877, 'number': 35} | {'precision': 0.855072463768116, 'recall': 0.9076923076923077, 'f1': 0.8805970149253731, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.8448275862068966, 'recall': 0.8596491228070176, 'f1': 0.8521739130434783, 'number': 57} | {'precision': 0.7777777777777778, 'recall': 0.875, 'f1': 0.823529411764706, 'number': 24} | {'precision': 0.125, 'recall': 0.5, 'f1': 0.2, 'number': 2} | {'precision': 0.9765886287625418, 'recall': 0.8414985590778098, 'f1': 0.9040247678018576, 'number': 347} | {'precision': 0.575, 'recall': 0.7419354838709677, 'f1': 0.6478873239436619, 'number': 31} | {'precision': 0.7586206896551724, 'recall': 0.7096774193548387, 'f1': 0.7333333333333333, 'number': 31} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 398} | {'precision': 0.8918918918918919, 'recall': 0.9705882352941176, 'f1': 0.9295774647887325, 'number': 136} | {'precision': 0.9114583333333334, 'recall': 0.9615384615384616, 'f1': 0.9358288770053476, 'number': 182} | {'precision': 0.9597069597069597, 'recall': 0.9726862901087244, 'f1': 0.9661530356907678, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.8360128617363344, 'recall': 0.7262569832402235, 'f1': 0.7772795216741405, 'number': 358} | {'precision': 0.75, 'recall': 1.0, 'f1': 0.8571428571428571, 'number': 6} | 0.9400 | 0.9417 | 0.9409 | 0.9329 |
| 0.041 | 18.0 | 972 | 0.3037 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 25} | {'precision': 0.7777777777777778, 'recall': 0.8, 'f1': 0.7887323943661971, 'number': 35} | {'precision': 0.8805970149253731, 'recall': 0.9076923076923077, 'f1': 0.8939393939393939, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.8333333333333334, 'recall': 0.8771929824561403, 'f1': 0.8547008547008548, 'number': 57} | {'precision': 0.7407407407407407, 'recall': 0.8333333333333334, 'f1': 0.7843137254901961, 'number': 24} | {'precision': 0.16666666666666666, 'recall': 0.5, 'f1': 0.25, 'number': 2} | {'precision': 0.9674267100977199, 'recall': 0.8559077809798271, 'f1': 0.908256880733945, 'number': 347} | {'precision': 0.5897435897435898, 'recall': 0.7419354838709677, 'f1': 0.6571428571428573, 'number': 31} | {'precision': 0.7666666666666667, 'recall': 0.7419354838709677, 'f1': 0.7540983606557377, 'number': 31} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 398} | {'precision': 0.9924812030075187, 'recall': 0.9705882352941176, 'f1': 0.9814126394052045, 'number': 136} | {'precision': 0.9259259259259259, 'recall': 0.9615384615384616, 'f1': 0.9433962264150944, 'number': 182} | {'precision': 0.9615987460815048, 'recall': 0.9761336515513126, 'f1': 0.9688116857481247, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.8204334365325078, 'recall': 0.7402234636871509, 'f1': 0.7782672540381792, 'number': 358} | {'precision': 0.75, 'recall': 1.0, 'f1': 0.8571428571428571, 'number': 6} | 0.9441 | 0.9465 | 0.9453 | 0.9368 |
| 0.0392 | 19.0 | 1026 | 0.2975 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 25} | {'precision': 0.7777777777777778, 'recall': 0.8, 'f1': 0.7887323943661971, 'number': 35} | {'precision': 0.8450704225352113, 'recall': 0.9230769230769231, 'f1': 0.8823529411764706, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.8571428571428571, 'recall': 0.8421052631578947, 'f1': 0.8495575221238938, 'number': 57} | {'precision': 0.7407407407407407, 'recall': 0.8333333333333334, 'f1': 0.7843137254901961, 'number': 24} | {'precision': 0.16666666666666666, 'recall': 0.5, 'f1': 0.25, 'number': 2} | {'precision': 0.9467084639498433, 'recall': 0.8703170028818443, 'f1': 0.9069069069069069, 'number': 347} | {'precision': 0.5897435897435898, 'recall': 0.7419354838709677, 'f1': 0.6571428571428573, 'number': 31} | {'precision': 0.7741935483870968, 'recall': 0.7741935483870968, 'f1': 0.7741935483870968, 'number': 31} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 398} | {'precision': 0.9924812030075187, 'recall': 0.9705882352941176, 'f1': 0.9814126394052045, 'number': 136} | {'precision': 0.9247311827956989, 'recall': 0.945054945054945, 'f1': 0.9347826086956522, 'number': 182} | {'precision': 0.9623332461417735, 'recall': 0.9756032882524529, 'f1': 0.9689228338161707, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.8184615384615385, 'recall': 0.7430167597765364, 'f1': 0.7789165446559297, 'number': 358} | {'precision': 0.75, 'recall': 1.0, 'f1': 0.8571428571428571, 'number': 6} | 0.9431 | 0.9467 | 0.9449 | 0.9357 |
| 0.0361 | 20.0 | 1080 | 0.3031 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 25} | {'precision': 0.7777777777777778, 'recall': 0.8, 'f1': 0.7887323943661971, 'number': 35} | {'precision': 0.8428571428571429, 'recall': 0.9076923076923077, 'f1': 0.8740740740740741, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.8421052631578947, 'recall': 0.8421052631578947, 'f1': 0.8421052631578947, 'number': 57} | {'precision': 0.7407407407407407, 'recall': 0.8333333333333334, 'f1': 0.7843137254901961, 'number': 24} | {'precision': 0.14285714285714285, 'recall': 0.5, 'f1': 0.22222222222222224, 'number': 2} | {'precision': 0.9552715654952076, 'recall': 0.861671469740634, 'f1': 0.9060606060606061, 'number': 347} | {'precision': 0.5714285714285714, 'recall': 0.7741935483870968, 'f1': 0.6575342465753424, 'number': 31} | {'precision': 0.7741935483870968, 'recall': 0.7741935483870968, 'f1': 0.7741935483870968, 'number': 31} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 398} | {'precision': 0.9924812030075187, 'recall': 0.9705882352941176, 'f1': 0.9814126394052045, 'number': 136} | {'precision': 0.9247311827956989, 'recall': 0.945054945054945, 'f1': 0.9347826086956522, 'number': 182} | {'precision': 0.9616087751371115, 'recall': 0.9763988332007425, 'f1': 0.9689473684210527, 'number': 3771} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.8184615384615385, 'recall': 0.7430167597765364, 'f1': 0.7789165446559297, 'number': 358} | {'precision': 0.75, 'recall': 1.0, 'f1': 0.8571428571428571, 'number': 6} | 0.9424 | 0.9467 | 0.9445 | 0.9357 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
CLAck/en-vi | [
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="SatishBethi/Q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
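Once loaded, a minimal hedged sketch of rolling out the greedy policy for one episode (assuming the pickled dict stores the table under `"qtable"`, as the course notebooks do):
```python
import numpy as np

state = env.reset()  # with gymnasium this returns (state, info) instead
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, done, info = env.step(action)     # gymnasium adds a `truncated` flag
    total_reward += reward
print(f"episode return: {total_reward}")
```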
|
CLAck/indo-mixed | [
"pytorch",
"marian",
"text2text-generation",
"en",
"id",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/ual/1673970310616/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1542798774455648256/ntKA4v3U_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">University of the Arts London</div>
<div style="text-align: center; font-size: 14px;">@ual</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from University of the Arts London.
| Data | University of the Arts London |
| --- | --- |
| Tweets downloaded | 3229 |
| Retweets | 755 |
| Short tweets | 30 |
| Tweets kept | 2444 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qedyahme/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ual's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/c9jdog4e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/c9jdog4e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ual')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
CLAck/indo-pure | [
"pytorch",
"marian",
"text2text-generation",
"en",
"id",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: mit
---
### chukotka on Stable Diffusion
This is the `<chukotka>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:







|
CLEE/CLEE | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.94 +/- 24.92
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
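Until then, a minimal hedged sketch of the usual stable-baselines3 loading pattern; the `repo_id` and `filename` below are placeholders for the actual Hub repository:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id and filename -- point these at the real checkpoint.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"{mean_reward:.2f} +/- {std_reward:.2f}")
```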
|
CLTL/gm-ner-xlmrbase | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"nl",
"transformers",
"dighum",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2023-01-17T15:17:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Vin2-P3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Vin2-P3
This model is a fine-tuned version of [HuyenNguyen/Vin1-P3](https://huggingface.co/HuyenNguyen/Vin1-P3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2714
- Wer: 13.9954
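If this checkpoint is a speech-recognition model (the WER metric suggests so), a minimal hedged usage sketch; the repo id is assumed from the card title:
```python
from transformers import pipeline

# "HuyenNguyen/Vin2-P3" is assumed from the card title; verify the actual repo id.
asr = pipeline("automatic-speech-recognition", model="HuyenNguyen/Vin2-P3")
print(asr("sample.wav")["text"])
```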
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 900
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3479 | 0.77 | 300 | 0.2915 | 15.5299 |
| 0.2372 | 1.54 | 600 | 0.2817 | 15.3866 |
| 0.128 | 2.31 | 900 | 0.2714 | 13.9954 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
CLTL/icf-levels-adm | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- germanquad
model-index:
- name: gbert-base_QA
results: []
widget:
- text: "welchen Vertrag oder welche Art von Anstellung?"
context: "Ihre Aufgaben als Fachplaner Gebäudeausrüstung (m/w/d):\n• Ihre Hauptaufgabe wird die eigenverantwortlich Zeichnungserstellung in verschiedenen Bereichen der TGA sein\n• Sie erstellen ebenfalls die benötigten Leistungsverzeichnisse sowie die Plänen und Schemata im Bereich der TGA\n• Sie begleiten und betreuen die abwechslungsreichen Projekte von der Planung bis zu Ausführung\n• Mit Ihr Fachwissen führen Sie die Prüfung von Dokumentationen, Angeboten-, Nachträgen- und Rechnungen durch\n• Die Koordination und Abstimmung der Werk- und Montageplanung mit den externen Bau- und Planungsbeteiligten rundet Ihr Aufgabengebiet gekonnt ab\nIhr Profil als Fachplaner Gebäudeausrüstung (m/w/d)\n• Abgeschlossenes Studium oder Technikerausbildung in der Fachrichtung Versorgungstechnik, Gebäudetechnik, Heizung-Lüftung-Sanitär-Klima (HLSK) oder vergleichbare Ausbildung/Berufserfahrung\n• Sie können gute Kenntnisse mit Plancal Nova und/oder Autocad MEP vorweisen\n• Sie haben Kenntnisse im Bereich aller Leistungsphasen der HOAI\nWas wir bieten?\n• Flexible und individuelle Arbeitszeitmodelle\n• Übertarifliche Konditionen (bei Gehalt und Urlaub und vermögenswirksame Leistungen)\n• Unbefristetes Arbeitsverhältnis\n• Individuelle Fort- und Weiterbildungsmöglichkeiten\n• #premium\n\nEine Stellenanzeige von Leasotec GmbH"
example_title: "Contract-1"
- text: "welchen Vertrag oder welche Art von Anstellung?"
context: "Du suchst neue Herausforderungen?\n\nWir suchen Dich – einen kreativen Denker mit Lösungsansätzen für verschiedene Aufgaben.\n\nELEKTRIKER – VOLLZEIT, DEUTSCHLANDWEITE MONTAGE, AUSLANDSEINSÄTZE MÖGLICH\n\nWir sind ein Messe- und Ladenbauunternehmen, das seit 2001 erfolgreich auf dem europäischen Markt etabliert ist. Unsere Mitarbeiter montieren europaweit Messestände, Eventlandschaften, Ausstellungsräume und Shops für unsere Kunden. Gestalte mit uns die Zukunft und verstärke unser Team für die nächsten Projekte!\n\nWir suchen zum nächstmöglichen Zeitpunkt einen erfahrenen und zuverlässigen Elektriker zur Festanstellung.\n\nAufgaben\n\n• Auf- und Abbau von Messeständen, insbesondere elektrischer Bestandteile\n\n• Kabelverlegung und Installation nach Plänen\n\n• Umsetzung von Schaltplänen\n\n• Unterstützung des Montageteams\n\nQualifikation\n\n• Abgeschlossene Berufsausbildung zum Elektriker\n\n• Reisebereitschaft und Bereitschaft zur Wochenendarbeit\n\n• Technisches und handwerkliches Verständnis\n\n• Sicheres Lesen von Elektroplänen\n\n• Selbständige, zuverlässige und saubere Arbeitsweise\n\n• Englischkenntnisse von Vorteil, aber nicht zwingend notwendig\n\n• Sympathisches und sicheres Auftreten\n\n• Teamfähigkeit\n\n• Führerschein Klasse B von Vorteil\n\nBenefits\n\n• Umfassende Einarbeitung sowie Unterstützung durch erfahrene Kollegen\n\n• Raum für Mitgestaltung und eine teamorientierte Arbeitsatmosphäre\n\n• Persönliche und fachliche Weiterentwicklung nach Ihren individuellen Bedürfnissen\n\n• Leistungsgerechte Vergütung\n\n• Unbefristeter sicherer Arbeitsplatz\n\n• Freiwillige soziale Leistungen: betriebliche Altersvorsorge, Mitarbeiterrabatte,\n\nGetränke, betriebliches Gesundheitsmanagement, Teamevents\n\nWenn du Deine Zukunft mit uns gestalten möchtest, freuen wir uns Dich kennenzulernen.\n\nBitte schicke Deine Bewerbung unter Angabe Deiner Gehaltsvorstellungen."
example_title: "Contract-2"
- text: "Welche Ausbildung braucht man für den Job?"
context: "Du suchst neue Herausforderungen?\n\nWir suchen Dich – einen kreativen Denker mit Lösungsansätzen für verschiedene Aufgaben.\n\nELEKTRIKER – VOLLZEIT, DEUTSCHLANDWEITE MONTAGE, AUSLANDSEINSÄTZE MÖGLICH\n\nWir sind ein Messe- und Ladenbauunternehmen, das seit 2001 erfolgreich auf dem europäischen Markt etabliert ist. Unsere Mitarbeiter montieren europaweit Messestände, Eventlandschaften, Ausstellungsräume und Shops für unsere Kunden. Gestalte mit uns die Zukunft und verstärke unser Team für die nächsten Projekte!\n\nWir suchen zum nächstmöglichen Zeitpunkt einen erfahrenen und zuverlässigen Elektriker zur Festanstellung.\n\nAufgaben\n\n• Auf- und Abbau von Messeständen, insbesondere elektrischer Bestandteile\n\n• Kabelverlegung und Installation nach Plänen\n\n• Umsetzung von Schaltplänen\n\n• Unterstützung des Montageteams\n\nQualifikation\n\n• Abgeschlossene Berufsausbildung zum Elektriker\n\n• Reisebereitschaft und Bereitschaft zur Wochenendarbeit\n\n• Technisches und handwerkliches Verständnis\n\n• Sicheres Lesen von Elektroplänen\n\n• Selbständige, zuverlässige und saubere Arbeitsweise\n\n• Englischkenntnisse von Vorteil, aber nicht zwingend notwendig\n\n• Sympathisches und sicheres Auftreten\n\n• Teamfähigkeit\n\n• Führerschein Klasse B von Vorteil\n\nBenefits\n\n• Umfassende Einarbeitung sowie Unterstützung durch erfahrene Kollegen\n\n• Raum für Mitgestaltung und eine teamorientierte Arbeitsatmosphäre\n\n• Persönliche und fachliche Weiterentwicklung nach Ihren individuellen Bedürfnissen\n\n• Leistungsgerechte Vergütung\n\n• Unbefristeter sicherer Arbeitsplatz\n\n• Freiwillige soziale Leistungen: betriebliche Altersvorsorge, Mitarbeiterrabatte,\n\nGetränke, betriebliches Gesundheitsmanagement, Teamevents\n\nWenn du Deine Zukunft mit uns gestalten möchtest, freuen wir uns Dich kennenzulernen.\n\nBitte schicke Deine Bewerbung unter Angabe Deiner Gehaltsvorstellungen."
example_title: "Contract-3"
- text: "welchen Vertrag oder welche Art von Anstellung?"
context: "Du möchtest für Fahrten in Inzing bezahlt werden? Auf der Suche nach einem Job mit Stundenlohn und echter Versicherung? Dann wird es Zeit, dich mit als unser Lieferbote (m/w/d) auf den Weg zu machen.\n\nUnterwegs\n\nStarte deinen Tag mit deinem eigenen Pedelec (Elektrofahrrad), Fahrrad oder einem unserer Firmen-Pedelecs. Als unser Kurierfahrer (m/w/d) beförderst du schmackhafte Gerichte durch deine Stadt – holst sie im Restaurant ab und bringst sie zu unseren Foodies. Es macht so viel Spaß und ist so einfach, wie es sich anhört!\n\nSo machen wir Dir das Leben leichter:\n\n• Bereitstellung deiner Ausrüstung\n• Hilfe bei der Nachverfolgung von Lieferungen\n• Ein fixer Stundenlohn\n• Zusätzliche Vergütung pro mit deinem Fahrrad (Pedelec) gefahrenen Kilometer\n\nUnser*e Fahrradkurier (m/w/d) ist:\n\n• Du bist mind. 18 Jahre alt\n• Du hast eine Arbeitserlaubnis in Österreich\n• Du hast ein eigenes Smartphone\n• Du hältst dich an die Straßenverkehrsordnung\n• Du besitzt ein straßentaugliches Fahrrad (Pedelec)\n\nDas bieten wir\n\nEs gibt viele Nebenleistungen, wenn du dem -Team beitrittst. Du wirst Folgendes schätzen:\n\n• 13. und 14. Monatslohn\n• Zuschläge für Mehrarbeit und Überstundenarbeit\n• Fairer & etablierter Arbeitgeber\n• Pauschale für die Nutzung des eigenen Telefons oder Fahrrads (Pedelec\n• Trinkgeld! Alle Trinkgelder die du verdienst, darfst du natürlich behalten.\n• Eine echte Versicherung ... wir haben dich abgesichert!\n• 2 Flexibilitäts-Modelle:\n- Wöchentliche Verfügbarkeiten\n- Langfristige Verfügbarkeiten\n• Flexibel kombinierbar mit Hobbys & Ausbildung\n• Flexible Jobmobilität\n• Flexibler Outdoor-Job\n• Festanstellung inkl. Sozialversicherung\n• Top Equipment\n• 1.100 Fahrer:innen-Community\n• Interne Aufstiegsmöglichkeiten\n• Willkommenskultur & Hub-Community\n• Haftpflichtversicherung (bei Schäden ggü. Dritten)\n\nKlicke auf die Schaltfläche Jetzt bewerben."
example_title: "Contract-4"
- text: "welchen Vertrag oder welche Art von Anstellung?"
context: "Sie suchen eine abwechslungsreiche und spannende Aufgabe, bei der Sie selbst-ständig und geregelt arbeiten können?\n\nWir, die SWiCA Conference Technology, planen und installieren seit 20 Jahren erfolgreich hochmoderne Konferenztechnik für unsere deutschen, europäischen und amerikanische Kunden. Unsere Audio- und Videolösungen sind ein Maßstab an\nQualität und modernster Kommunikation in allen Businessbereichen.\n\nUnsere Audio-, Video- und Steuerungslösungen sind ein Maßstab an Qualität und modernster Kommunikation in allen Businessbereichen.\n\nAufgaben\n\nWir suchen für einen unserer Kunden eine permanente vor Ort Betreuung der Veranstaltungs- und Konferenzräume zum nächst möglichen Termin.\n\nIhr Fokus:\n\n• Technische Betreuung internationaler Meetings\n• Vor- und Nachbereitung der technischen Konfigurationen\n• Aufbau und Betreuung der aktuellen technischen Ausstattung\n• Fehlersuche und Beseitigung bei Problemen\n• Installation gelegentlich von Projektoren, Bildschirmen, Mikrofonanlagen und Lautsprechersystemen\n• Dokumentation von Systemen und Abläufen\n• Support von Kundenveranstaltungen\n\nQualifikation\n\nDie Anforderungen:\n\n• Erfolgreich abgeschlossene technische / elektronische Ausbildung, vorzugsweise mit Berufserfahrung\n• Zuverlässige Arbeitsweise\n• Erfahrung in der Installation und im Umgang von elektronischen Geräten\n• Solide Team- & Kommunikationsfähigkeiten\n• Netzwerkkenntnisse sind vorteilhaft - Sichere PC & Windowskenntnisse sind selbstverständlich\n• Aufgeschlossenheit und Lernwilligkeit für neue Technologien\n• Gute Englischkenntnisse\n• Hohes Maß an Kunden- und Serviceorientierung\n\nBenefits\n\nUnser Angebot:\n\n• eine unbefristete Anstellung in Vollzeit\n• eine attraktive Vergütung\n• Trainings und individuelle Weiterbildungen\n• einen sicheren Arbeitsplatz\n• Fahrkostenzuschuss\n• Einkaufsvorteile\n• Interessante Tätigkeiten in einem internationalen Unternehmen\n• ein Tätigkeitsbereich mit Zukunft Interessiert? Dann freuen wir uns auf Ihre aussagekräftigen Bewerbungsunterlagen."
example_title: "Contract-5"
- text: "Welcher Führerschein wird benötigt?"
context: "Eurovia GEMEINSAM BAUEN. ENTDECKEN SIE EUROVIA: ALS MENSCH MIT IDEEN. Moderner Verkehrswegebau geht für uns Hand in Hand mit fachlicher und sozialer Kompetenz. Ob als Teil einer professionellen Kolonne, in der Zentrale oder in verantwortungsvoller Leitungsfunktion – eines ist ganz sicher: Gemeinsam bringen wir durchdachte Infrastruktur auf den Weg. Werden Sie Teil der *EUROVIA Gruppe* und unterstützen uns in der Region *Berlin/Brandenburg, Hamburg/Schleswig-Holstein, Niedersachsen, Nordrhein-Westfalen, Hessen, Rheinland-Pfalz, Baden-Württemberg, Bayern, Sachsen *oder *Thüringen* zum * nächstmöglichen Zeitpunkt* als * Zweiwegebaggerfahrer (m/w/d) im Gleisbau – Quereinsteiger willkommen IHRE AUFGABEN BEI UNS * Sie bedienen die Zweiwegebagger sicher und vorschriftsmäßig bei Bau-, Reparatur- und Instandhaltungsarbeiten im Gleis- und Tiefbau * Sie bereiten Sperr- und Rangierfahrten vor und führen diese durch * Sie führen Bremsproben und wagentechnische Untersuchungen durch * Sie fahren Ihre Einsatzorte selbstständig an WAS UNS BEGEISTERT * Sie verfügen über Berufserfahrung als Baugeräteführer im Gleisbereich der Deutschen Bahn und die Qualifikation zum Triebfahrzeugführer für Zweiwegebagger gemäß Triebfahrzeugführerschein-Verordnung (TfV) * Sie haben den Führerschein Klasse B * Sie bringen eine hohe Einsatzbereitschaft und Montagebereitschaft (deutschlandweit) mit * Sie besitzen ein Höchstmaß an Verantwortungsbewusstsein. * Sie setzen die Vorgaben an Sicherheit und Arbeitssicherheit auf den Baustellen konsequent und zuverlässig durch * Sie verfügen über manuelles Geschick, haben technisches Verständnis sowie Spaß am Umgang mit Baumaschinen und technischen Geräten * Sie sind teamfähig, arbeiten umsichtig und unterstützen aktiv die positive Sicherheitskultur unseres Unternehmens * Sie erkennen Schwachstellen, Mängel und Verbesserungspotenziale und kommunizieren diese zeitnah. Sie haben noch keine Qualifikation zum Triebfahrzeugführer, möchten sich aber gerne dazu qualifizieren lassen? Dann bieten wir Ihnen die Möglichkeit dazu, wenn Sie folgende Voraussetzungen erfüllen: * Erfahrung in der Bedienung von Baugeräten/Baggern * Lern- und Begeisterungsfähigkeit (Qualifikation dauert mehrere Wochen und schließt mit einer Prüfung ab) * Hohe Belastbarkeit in Hinblick auf Stress, Lärm und Verantwortung (Arbeit im Gleisbereich) UNSER ANGEBOT Als wichtiger Teil von EUROVIA arbeiten Sie in der Position des Baugeräteführers auf Zweiwegebaggern deutschlandweit und genießen die Vorteile eines Großkonzerns. Konkret heißt das: Weiterbildung, die genau auf Sie zugeschnitten ist. Denn wir möchten, dass Sie mit uns gut vorankommen. Ebenso halten wir attraktive Entwicklungsmöglichkeiten für Sie bereit, die Sie ebenso begeistern werden wie unsere derzeit 38.000 Mitarbeiter und Mitarbeiterinnen weltweit. *Darüber hinaus bieten wir Ihnen Folgendes:* * Eine unbefristete Anstellung * Gründliche Einarbeitung und permanente Unterstützung im Team einer Arbeitskolonne * Ganzjährige Beschäftigung ohne saisonale Ausfallzeiten * Gute tarifvertragliche Vergütung inkl. 
Zuschlägen für Nacht-, Wochenend- und Feiertagsarbeit sowie Verpflegungsmehraufwand * Attraktives Prämiensystem * Möglichkeit zur Teilnahme an unserem Mitarbeiterbeteiligungsprogramm * Moderne und attraktive Rahmenbedingungen, in denen Sie sich mit Ihren Stärken voll entfalten können * Flache Hierarchien und optimale Möglichkeiten, eigene Ideen ins Unternehmen einzubringen * Großzügige Planungshorizonte, planbarer Freizeitausgleich * Mindestens jedes zweite Wochenende vollständig frei * Modernste Zweiwegebagger inkl. modernster Anbaugeräte * Firmenfahrzeug mit Tankkarte, auch zur privaten Nutzung IHRE BEWEBUNG Nehmen Sie Kontakt auf, wir freuen uns auf Ihre Bewerbung inkl. Angabe der gewünschten Region über unsere [Karriereseite](https://www.eurovia.de/#karriere). VINCI Construction Shared Services GmbH Frau Nina Mecking • Personal • Rheinbabenstraße 75 • 46240 Bottrop Tel. +49 2041 792-382 [www.eurovia.de](http://www.eurovia.de)"
example_title: "License Type-1"
- text: "Welcher Führerschein wird benötigt?"
context: "Sie wollen sich beruflich verändern und suchen nach einer neuen Herausforderung?\n\nDann sind Sie bei uns genau richtig Wir suchen ab sofort für einen geschätzten Kunden aus Schorndorf einen Lagermitarbeiter (m/w/d) in Vollzeit.\nWird also höchste Zeit, dass Sie zu uns kommen !\n\nIhre Aufgaben\n\nkommissionieren von Holzprodukten\nVerpacken und Prüfung der kommissionierten Waren\nerstellen von Versandpapieren\nBe- und Entladen von LKWs\nAllgemeine Lagertätigkeiten\n\nDas bringen Sie mit\n\nKenntnisse in der Kommissionierung sind wünschenswert\nStaplerführerschein von Vorteil\nEDV-Kenntnisse\nHohe Motivation & Teamfähigkeit\n\nIhre Vorteile\nDas bieten wir Ihnen:\nUnbefristeter Arbeitsvertrag - und das von Anfang an\ntariflich geregelte Rahmenbedingungen wie z.B. Weihnachts- und Urlaubsgeld, Schicht- und Branchenzuschläge, VWL usw.\nlangfristige Einsätze bei namhaften Unternehmen in der Region mit Option auf eine feste Übernahme\nEmpfehlungsprämie\nWeiterbildungs- und Entwicklungsmöglichkeiten in unserer eigenen Akademie\nArbeitsmedizinische Betreuung und kostenfreie Arbeitskleidung\nTOP-Betreuung: Ihr persönlicher Ansprechpartner in der Niederlassung, der sich jederzeit um Ihre Anliegen kümmert\n\nBester Arbeitgeber\nWir gehören zu den 35 besten Arbeitgebern in Deutschland (Auswertung von zeit.de in Kooperation mit kununu.com)\n\nAls Spezialist für gewerbliche, kaufmännische und technische Hilfs-, Fach- sowie Führungskräfte orientieren wir uns an dem, was sich im Bereich der Jobsuche bewährt hat.\n\nKlingt gut? Dann bewerben Sie sich jetzt!"
example_title: "License Type-2"
- text: "Welcher Führerschein wird benötigt?"
context: "Wir suchen zum nächstmöglichen Zeitpunkt eine Küchenhilfe (m/w/d) in Potsdam - Charlottenhof für 25 - 30 h/ Woche in einem Altenpflegeheim. Es erwartet Sie ein krisensicherer Arbeitsplatz mit leistungsgerechter Vergütung.\n\nAufgaben\n\n• Vor- und Zubereitung von Frühstück, Vesper, Abendbrot z. B. für Heimbewohner\n• Speisenportionierung und Verteilung im Speisesaal und auf den Zimmern der Einrichtung\n• Reinigung und Abwasch von Geschirr, Küchenutensilien und Produktionsgeräten\n• Reinigung und Desinfektion der Küche nach Plan\n\nQualifikation\n\n• Sie lieben die Arbeit mit Menschen und pflegen einen höflichen und korrekten Umgang mit Heimbewohnern und Kunden\n• Zuverlässigkeit, Belastbarkeit, Lern- und Leistungsbereitschaft\n• Sie sprechen und verstehen gut Deutsch\n• Bereitschaft auch am Wochenende zu arbeiten\n\nBenefits\n\n• unbefristete Anstellung\n• pünktliche und leistungsgerechte Entlohnung (ab 12,50 €)\n• vergünstigte Mitarbeiterverpflegung\n• leistungsorientierte Zusatzzahlung\n• Weihnachts- und Urlaubsgeld\n• geregelte Arbeitszeiten (25 - 30 h/ Woche oder Minijob, 5 - Tage Woche, mögliche Arbeitstage: Mo - So, 2 Schichten: 06.00 Uhr - 13.30 Uhr und 11.30 Uhr - 19.00 Uhr, i. d. R. jedes 2. WE frei)\n• auf die Bedürfnisse der Mitarbeiter abgestimmte Dienstpläne\n• Dienstkleidung wird kostenfrei zur Verfügung gestellt\n• kostenfreie Getränkeversorgung (Kaffee, Tee, Wasser)\n• intensive Einarbeitung\n• schnelle Entscheidungswege und direkte Ansprechpartner\n• das Nachweisheft für Beschäftigte im Umgang mit Lebensmitteln nach § 43 Abs. 5 Infektionsschutzgesetzes können Sie bei uns erlangen\n\nSeien Sie mutig! Es erfolgt eine intensive Einarbeitung. Gern können Sie vor der Aufnahme einer Beschäftigung bei uns die Arbeitsabläufe und unser Unternehmen in einer Probearbeit kennenlernen.\n\nIhr Kontakt zu unserem RWS Team:\nRWS Cateringservice GmbH\nLilli Liegmann\nAm alten Flughafen 1\n04356 Leipzig\nTel.: 0341/9170469\n\nOder gleich per SMS: mit dem Stichwort “Bewerbung“ an die 0151/15352101.\nWir rufen Sie garantiert innerhalb von 24 Stunden zurück!\n\nWIR FREUEN UNS, SIE KENNEN ZU LERNEN."
example_title: "License Type-3"
- text: "Welcher Führerschein wird benötigt?"
context: "Aufgaben\n\nZurzeit suchen wir einen Elektrotechnikermeister m/w/d als Projektleiter zur selbständigen Abwicklung mehrerer Bauvorhaben. Dazu gehört die technische, wie auch kaufmännische Projektbetreuung.\n\nQualifikation\n\n• Uns ist wichtig:\n• Eine gelebte Leidenschaft für Ihren Beruf mit der Bereitschaft sich weiterzubilden\n• Ein kundenorientiertes Denken Die Fähigkeit, technisch sowie kaufmännisch Ihre Baustellen zu beaufsichtigen\n• Selbständiges und gewissenhaftes Arbeiten\n• Teamfähigkeit\n\nBenefits\n\n• Einen unbefristeten Vertrag\n• Die Sicherheit einer bereits mehr als 55 Jahren bestehenden Firma\n• Übertarifliche Zahlung\n• Regelmäßige Fortbildungen\n• Eine betriebliche Altersversorgung, für die wir 720 Euro jährlich für Sie einzahlen\n• Weihnachtsgeld\n• Jahresprämie\n• Betriebliches Gesundheitsmanagement\n• Firmenhandy und Firmenlaptop\n• Firmenfahrzeug auch zur privaten Nutzung\n\nWir sind ein Inhabergeführtes Unternehmen, das bereits seit 1966 in der Würzburger Umgebung agiert. Wir legen neben guter Arbeit für unsere Kunden auch viel Wert auf ein gutes Betriebsklima, das auch in gemeinsamen Unternehmungen miteinander (Grillfeste, Biergarten, Weihnachtsfeier, Stammtisch) gepflegt wird."
example_title: "License Type-4"
- text: "ist Homeoffice in dieser Position möglich?"
context: "Gemeinsam mit unserem Partner - Craftnote - suchen wir vielleicht genau Dich?\n\nUnser Partner definiert die Arbeit in einem der am stärksten digital unterversorgten Märkte in Europa neu. Mit klarer Logik, intuitiven Design und dem direkten Draht zu deren Kunden verändert Craftnote die Art und Weise, wie Bauprojekte ablaufen. Das Team von mehr als 30 Mitarbeiter:innen aus 12 verschiedene Ländern stellt sich dieser Herausforderung jeden Tag aufs Neue.\n\nAufgaben\n\n• Du betreust von der Produkt-Demo bis hin zur Vertragsverhandlung und -abschluss den gesamten Prozess\n• Crafnote hat eine hohe Lead-Qualität (warme Leads) und wenig administrativen Aufwand\n• Du verkaufst ein Produkt, welches die Arbeit vieler Menschen vereinfacht und innerhalb der Kundengruppe erwiesenermaßen angenommen wird\n• Austausch und Ideen sind willkommen, gemeinsame Ziele werden in Zusammenarbeit und teamübergreifend erzielt\n• Crafnotes Sales-Zyklen sind kurz und Du wirst mit schnellen Ergebnissen belohnt\n• Du übernimmst die Betreuung der Bestandskunden und baust zusätzlich Deinen Kundenstamm auf - Upsellings und Cross-Sellings sind möglich\n• Eigene Ideen bei der Mitgestaltung der Sales-Prozesse sind immer erwünscht\n• Sowohl bei der Ausstattung als auch bei unseren Prozessen\n\nQualifikation\n\n• Du hast Erfahrung im Vertrieb von erklärungsbedürftigen Produkten, wünschenswert im B2B-Bereich\n• Du bist ein flexibler Schnelldenker, der bereit ist, Ideen zu testen\n• Du bist bekannt für Deine kommunikative und offene Art und kannst andere schnell begeistern\n• Dein Closing Talent zeichnet Dich aus\n• Du verstehst die Bedürfnisse Deiner Kunden und kannst auf diese empathisch eingehen\n• Du bist ein Netzwerker und Teamplayer\n• Du findest Dich allgemein gut in Softwareanwendungen zurecht\n• Du hast Lust Dich in einem agilen Team weiterzuentwickeln und Neues zu lernen\n• Du sprichst Deutsch auf Muttersprachenniveau\n\nBenefits\n\n• Persönliche Entwicklung: Ein jährliches Budget für Schulungen und Weiterbildungen, regelmäßige 360° Feedbacks und eine offene Feedbackkultur\n• Ein tolles Team: Craftnote ist leidenschaftlich bei dem, was sie tun, und möchten, dass Du es auch bist: sie glauben an gemeinsame Ideen- und Entscheidungsfindungen\n• Work-Life-Balance: Flexible Arbeitszeiten und Home-Office-Regelungen und 30 Tage Urlaub\n• Solide Vergütung: Unbefristeter Arbeitsvertrag mit attraktivem Gehalt, Zuschuss zum BVG-Firmenticket oder Deiner Mitgliedschaft bei Urban Sports Club und betriebliche Altersvorsorge\n• Modernste Technik: Hardware nach Wunsch, ergonomische Schreibtische und Stühle sowie Tageslichtlampen um gegen den Berliner Winter anzukämpfen\n\nInteresse mehr herauszufinden?\n\nDann freue ich mich auf Deine Bewerbung!"
example_title: "Home-office-1"
- text: "ist Homeoffice in dieser Position möglich?"
context: "Herth+Buss ist ein mittelständiges, unabhängiges und inhabergeführtes Familienunternehmen mit 250 Mitarbeitern, welches seit über 90 Jahren auf globalisierten Märkten tätig ist. Wir sind Spezialist für Fahrzeugelektrik und Verschleißteile für asiatische Kraftfahrzeuge im internationalen Independent Aftermarket. Kunden- und Mitarbeiterzufriedenheit genießen bei uns den höchsten Stellenwert.\n\nZum nächstmöglichen Zeitpunkt suchen wir einen\n\nTechnischen Vertriebsaußendienst (m/w/d) für Werkstattausrüstung\n\nIhre Leidenschaft liegt im Kfz-Bereich und Ihr Herz schlägt für den Außendienst? Dann sind Sie bei uns genau richtig! Sie sind entweder bei unseren Kunden vor Ort oder aus dem Mobile-Office heraus tätig. Ein Firmenwagen zur dienstlichen und privaten Nutzung wird Ihnen gestellt. Sie sind spezialisiert auf unsere Produkte für die Werkstattausrüstung und nach einer intensiven Einarbeitung und Schulung unserer Produkte unterstützen Sie unsere Kunden und bringen unser Werkstattausrüstungs-Sortiment voran. Wir freuen uns auf Ihre Bewerbung!\n\nTätigkeitsbeschreibung\n\n• Vertrieb von Kfz-Teilen mit Schwerpunkt auf Kfz-Diagnosegeräten, ADAS Kalibriergeräten und Wallboxen E-Auto\n• Kundenbearbeitung, (technische) Beratung unserer Zielkunden und Gewinnung von Neukunden\n• Ansprechpartner für Kunden sowie Beantwortung technischer Fragen\n• Regelmäßige Kundenbesuche und Präsentation von Produkten\n• Durchführung von Technik-Schulungen sowie Besuch von Messen und Hausmessen\n• Dokumentation der Verkaufs- und Kundenaktivitäten im CRM-System (SAP Sales Cloud)\n• Marktbeobachtung und -forschung\n\nAnforderungsprofil\n\n• Abgeschlossene technische Ausbildung im Kfz-Gewerbe oder Kaufmännische Ausbildung mit technischer Weiterbildung im Kfz-Gewerbe\n• Vertriebserfahrung für Werkstattausrüstung (mindestens 3 Jahre)\n• Sehr gutes technisches Verständnis (Kfz) sowie gültige Fahrerlaubnis (Klasse B)\n• Gute Deutschkenntnisse in Wort und Schrift\n• Englischkenntnisse von Vorteil\n• Flexibilität und Reisebereitschaft\n• Kundenorientierte und selbstständige Arbeitsweise und eine kommunikationsstarke Persönlichkeit\n• Hohe Einsatzbereitschaft sowie ein sicheres und freundliches Auftreten\n\nWir bieten Ihnen…\n\n…einen unbefristeten Arbeitsvertrag mit 30 Tagen Urlaub im Kalenderjahr. Sie arbeiten in einem internationalen Arbeits\xadumfeld mit neuester Büro- und IT-Technik und haben die Möglichkeit auf einen interessanten und sicheren Arbeitsplatz in einem sich ständig weiterentwickelnden Unternehmen. Nutzen Sie außerdem unsere Weiterbildungs\xadangebote, unsere sozialen Leistungen und sichern Sie sich eine langfristige Jobperspektive bei Herth+Buss.\n\nGleitzeit\n\nBei uns können Sie Ihre Arbeitszeit in Abstimmung mit Ihrer Führungs\xadkraft frei gestalten. Dies bedeutet, dass Sie Ihren Arbeitstag nicht zu einer festgelegten Zeit beginnen oder beenden müssen, sondern innerhalb einer bestimmten Zeitspanne frei gestalten können. So sind Sie flexibel und können Ihre privaten Besorgungen und Termine optimal in Ihren Arbeitsalltag integrieren.\n\nBetriebsrestaurant\n\nDas Angebot in unserem Betriebsrestaurant wird subventioniert. Sie dürfen zwischen Suppen, Salaten, Menüs und Desserts wählen. Auch zum Frühstück bieten wir verschiedene Leckereien zu günstigen Preisen an. Durch Menüvorschläge können Sie selbst am Speiseplan mitwirken.\n\nBetriebliche Altersversorgung\n\nSie möchten für Ihr Alter vorsorgen? Dann können Sie über uns eine Direkt\xadver\xadsi\xadche\xadrung abschließen. 
Hierbei können Gehaltsbestandteile oder Sonder\xadzah\xadlungen (Urlaubs- und Weihnachtsgeld) in die betriebliche Altersvorsorge eingebracht werden.\n\nHerth+Buss aktiv\n\nDie Gesundheit der Mitarbeiter ist uns eine Herzensangelegenheit. Deshalb bezuschussen wir die Mitgliedschaft in ausgewählten Fitnessstudios, organisieren Sportevents und unterstützen unsere Mitarbeiter bei der Rauchentwöhnung. Jährliche Grippeschutzimpfungen gehören ebenso zu unserem Aktiv-Programm, wie kostenlose Sehtests.\n\nVermögenswirksame Leistungen\n\nWir unterstützen Sie bei der langfristigen Geldanlage. Und das mit dem Höchst\xadbetrag von 40 Euro im Monat. Die Art der Geldanlage, ob z. B. Spar- oder Bau\xadspar\xadvertrag, ist dabei frei wählbar.\n\nBetriebsveranstaltungen\n\nSpaß muss sein! Daher kommen bei uns Betriebsfeiern nicht zu kurz. Jährlich findet unsere Weihnachtsfeier statt, bei der wir gemeinsam das Jahr ausklingen lassen. Ebenfalls einmal im Jahr kommen wir bei unserem Betriebsfest zusammen. Hier lassen wir uns jedes Mal was Neues einfallen – lassen Sie sich überraschen! Bei unseren Fußballturnieren haben Sie die Möglichkeit, gemeinsam mit Ihrem Team um den Unternehmenspokal zu spielen!\n\nFühlen Sie sich angesprochen?\n\nDann senden Sie Ihre Bewerbungsunterlagen schriftlich oder elektronisch im PDF-Format an:\n\nHerth+Buss Fahrzeugteile GmbH & Co. KG Personalentwicklung Dieselstraße 2-4 i 63150 Heusenstamm"
example_title: "Home-office-2"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gbert-base_QA
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on the germanquad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
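That said, a minimal hedged usage sketch with the `question-answering` pipeline (the repo id is a placeholder; point it at wherever this checkpoint is hosted):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="<user>/gbert-base_QA")  # placeholder repo id
result = qa(
    question="Welcher Führerschein wird benötigt?",
    context="Sie haben den Führerschein Klasse B und bringen Reisebereitschaft mit.",
)
print(result["answer"], round(result["score"], 3))
```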
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu102
- Datasets 2.8.0
- Tokenizers 0.13.2
|
CLTL/icf-levels-etn | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3377
- Accuracy: 0.8857
## Model description
More information needed
## Intended uses & limitations
More information needed
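For orientation, a hedged inference sketch (the fine-tuned repo id is a placeholder; decoding 16 real frames from a video file, e.g. with decord, is left out and stand-in frames are used instead):
```python
import numpy as np
import torch
from transformers import AutoImageProcessor, VideoMAEForVideoClassification

processor = AutoImageProcessor.from_pretrained("MCG-NJU/videomae-base")
model = VideoMAEForVideoClassification.from_pretrained("<user>/videomae-base-finetuned-ucf101-subset")  # placeholder

video = [np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8) for _ in range(16)]  # stand-in frames
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```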
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6358 | 0.25 | 300 | 1.4478 | 0.4857 |
| 1.5781 | 1.25 | 600 | 0.8536 | 0.7 |
| 0.0676 | 2.25 | 900 | 0.9275 | 0.7857 |
| 1.1855 | 3.25 | 1200 | 0.3377 | 0.8857 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
CLTL/icf-levels-ins | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1323
- Accuracy: 0.9571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1539 | 0.25 | 150 | 1.0269 | 0.6 |
| 0.6807 | 1.25 | 300 | 0.3809 | 0.8429 |
| 0.1908 | 2.25 | 450 | 0.4966 | 0.8429 |
| 0.445 | 3.25 | 600 | 0.1323 | 0.9571 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
CLTL/icf-levels-mbw | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- landscape
widget:
- text: professional photo of fforiver river running alongside the Colosseum in Rome
---
# DreamBooth model for the fforiver concept trained on the CCMat/forest-river dataset.
This is a Stable Diffusion model fine-tuned on the fforiver concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of fforiver river**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `river` images for the landscape theme.
Pretrained Model: nitrosocke/elden-ring-diffusion
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('CCMat/fforiver-river')
# StableDiffusionPipeline requires a prompt; use the instance phrase so the learned concept appears.
image = pipeline("a photo of fforiver river").images[0]
image
```
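Prompts should include the instance phrase `fforiver river` for the learned concept to appear; moving the pipeline to GPU with `pipeline.to("cuda")` is the usual speed-up.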
## Samples
Prompt: "fforiver river in front of the The Taj Mahal, professional photograph"

<br>
Prompt: "Fallout concept of fforiver river in front of Chichén Itzá in Mexico, sun rays, unreal engine 5"

<br>
Prompt: "high quality photo of fforiver river along the Colosseum in Rome"

<br>
Prompt: "Oil painting of fforiver river in front of the Machu Picchu"

<br>
|
CM-CA/Cartman | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6188 | 1.0 | 3928 | 0.6062 |
| 0.5107 | 2.0 | 7856 | 0.5583 |
| 0.4381 | 3.0 | 11784 | 0.5480 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
CNT-UPenn/RoBERTa_for_seizureFrequency_QA | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2023-01-17T15:47:22Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 50.90 +/- 23.99
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
CTBC/ATS | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-17T15:53:20Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.48 +/- 0.14
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
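Until then, a hedged sketch of the usual pattern; the `repo_id` and `filename` are placeholders, and `panda_gym` must be installed for the environment to register:
```python
import gym
import panda_gym  # noqa: F401 -- importing registers PandaReachDense-v2
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id and filename -- point these at the real checkpoint.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
action, _ = model.predict(obs, deterministic=True)
```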
|
CZWin32768/xlm-align | [
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2106.06381",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2023-01-17T15:57:47Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1-mod
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Caddy/UD | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-17T15:59:12Z | <p> Model A: Basil mix fixed</p>
<p> Model B: AnythingV3-pruned</p>
<p> Weight: 1,0.9,0.7,0.5,0.3,0.1,1,1,1,1,1,1,0,0,0,0,0,0,0,0.1,0.3,0.5,0.7,0.9,1 </p>
<p> Base alpha: 0</p> |
Callidior/bert2bert-base-arxiv-titlegen | [
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:arxiv_dataset",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| summarization | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 145 | 2023-01-17T16:01:10Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="meganstodel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CallumRai/HansardGPT2 | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | 2023-01-17T16:04:40Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the course notebook
# (not an importable library function).
model = load_from_hub(repo_id="meganstodel/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
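The mean reward reported above can be reproduced with a simple evaluation loop. A sketch continuing from the snippet above, with the same `model["qtable"]` assumption and the older 4-tuple Gym step API (newer Gym/Gymnasium returns five values from `step`):

```python
import numpy as np

episode_rewards = []
for _ in range(100):
    state = env.reset()
    done, total = False, 0.0
    while not done:
        action = np.argmax(model["qtable"][state])  # always exploit
        state, reward, done, info = env.step(action)
        total += reward
    episode_rewards.append(total)

print(f"mean_reward = {np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")
```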
|
Carlork314/Carlos | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-17T16:28:04Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1319.92 +/- 92.68
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and checkpoint filename are placeholders, not confirmed by this card):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it into an A2C agent.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
CasualHomie/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: deberta-large-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: sst2
split: train
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9495412844036697
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-large-finetuned-sst2
This model is a fine-tuned version of [microsoft/deberta-large](https://huggingface.co/microsoft/deberta-large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2159
- Accuracy: 0.9495
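A minimal inference sketch (the checkpoint id below is a placeholder for wherever this fine-tune is published; label names depend on the model's `id2label` mapping):

```python
from transformers import pipeline

# "<user>/deberta-large-finetuned-sst2" is a placeholder repo id.
classifier = pipeline("text-classification", model="<user>/deberta-large-finetuned-sst2")
print(classifier("A gorgeous, witty, seductive movie."))
# -> [{'label': ..., 'score': ...}]
```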
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
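For reference, these settings map onto `transformers.TrainingArguments` roughly as follows (a sketch; `output_dir` is a placeholder, and the Adam betas/epsilon and linear scheduler listed above are the `Trainer` defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-large-finetuned-sst2",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```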
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1214 | 1.0 | 4210 | 0.1969 | 0.9438 |
| 0.067 | 2.0 | 8420 | 0.2159 | 0.9495 |
| 0.0405 | 3.0 | 12630 | 0.2159 | 0.9495 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Cat/Kitty | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-17T16:30:29Z | ---
language:
- fa
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Fa - BuzzyBuzzy
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: fa
split: test
args: 'config: fa, split: test'
metrics:
- name: Wer
type: wer
value: 37.437042998071405
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Fa - BuzzyBuzzy
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3618
- Wer: 37.4370
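A minimal transcription sketch (the repo id is a placeholder for this card's Hub repo; the pipeline decodes common audio formats via ffmpeg):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="<user>/whisper-small-fa")
print(asr("sample.wav")["text"])
```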
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2714 | 0.43 | 1000 | 0.4762 | 46.3988 |
| 0.1943 | 0.86 | 2000 | 0.3967 | 40.7573 |
| 0.1154 | 1.29 | 3000 | 0.3770 | 38.4124 |
| 0.1037 | 1.72 | 4000 | 0.3618 | 37.4370 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
## Weights & Biases project page
[Whisper-Small-Fa-BuzzyBuzzy](https://wandb.ai/mtaesiri/Whisper-Small-Fa-BuzzyBuzzy)
|
Cdial/hausa-asr | [
"wav2vec2",
"automatic-speech-recognition",
"ha",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-01-17T16:32:31Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: modelv3_WS_CV1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelv3_WS_CV1
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2507
- Ame: {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22}
- Anguage: {'precision': 0.7608695652173914, 'recall': 0.7777777777777778, 'f1': 0.7692307692307693, 'number': 45}
- Du Degree: {'precision': 0.7857142857142857, 'recall': 0.8461538461538461, 'f1': 0.8148148148148148, 'number': 65}
- Du End Date: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}
- Du University: {'precision': 0.8666666666666667, 'recall': 0.8571428571428571, 'f1': 0.861878453038674, 'number': 91}
- Ears Ex: {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24}
- Er Name: {'precision': 0.4, 'recall': 0.6666666666666666, 'f1': 0.5, 'number': 6}
- Kill: {'precision': 0.9733727810650887, 'recall': 0.9426934097421203, 'f1': 0.9577874818049491, 'number': 349}
- Ractice: {'precision': 0.4838709677419355, 'recall': 0.625, 'f1': 0.5454545454545454, 'number': 24}
- Rade: {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 24}
- Ummarize: {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550}
- Xpertise: {'precision': 0.8571428571428571, 'recall': 0.9836065573770492, 'f1': 0.916030534351145, 'number': 61}
- X Company: {'precision': 0.9698275862068966, 'recall': 0.9868421052631579, 'f1': 0.9782608695652174, 'number': 228}
- X Description: {'precision': 0.9955926699141731, 'recall': 0.976120081874005, 'f1': 0.9857602204869087, 'number': 4397}
- X End Date: {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 2}
- X Location: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4}
- X Position: {'precision': 0.8416075650118203, 'recall': 0.978021978021978, 'f1': 0.9047013977128335, 'number': 364}
- X Start Date: {'precision': 0.2, 'recall': 0.14285714285714285, 'f1': 0.16666666666666666, 'number': 7}
- Overall Precision: 0.9706
- Overall Recall: 0.9692
- Overall F1: 0.9699
- Overall Accuracy: 0.9603
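A minimal token-classification inference sketch (the fine-tuned checkpoint id is a placeholder; `apply_ocr=True` requires pytesseract and the Tesseract binary):

```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained("<user>/modelv3_WS_CV1")  # placeholder

image = Image.open("resume_page.png").convert("RGB")
encoding = processor(image, return_tensors="pt")
pred_ids = model(**encoding).logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in pred_ids])
```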
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Ame | Anguage | Du Degree | Du End Date | Du University | Ears Ex | Er Name | Kill | Ractice | Rade | Ummarize | Xpertise | X Company | X Description | X End Date | X Location | X Position | X Start Date | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------:|:-------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.3524 | 1.0 | 54 | 0.7720 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 22} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 45} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 91} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.5976095617529881, 'recall': 0.8595988538681948, 'f1': 0.7050528789659224, 'number': 349} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 24} | {'precision': 0.7203166226912929, 'recall': 0.9927272727272727, 'f1': 0.834862385321101, 'number': 550} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 61} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 228} | {'precision': 0.9106833910034602, 'recall': 0.9576984307482375, 'f1': 0.9335993792262498, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.56, 'recall': 0.038461538461538464, 'f1': 0.07197943444730077, 'number': 364} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | 0.8582 | 0.8095 | 0.8332 | 0.7993 |
| 0.6502 | 2.0 | 108 | 0.4169 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 22} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 45} | {'precision': 0.30057803468208094, 'recall': 0.8, 'f1': 0.4369747899159664, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.5853658536585366, 'recall': 0.26373626373626374, 'f1': 0.36363636363636365, 'number': 91} | {'precision': 0.3333333333333333, 'recall': 0.5416666666666666, 'f1': 0.4126984126984126, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.98125, 'recall': 0.8997134670487106, 'f1': 0.9387144992526159, 'number': 349} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 24} | {'precision': 0.9927007299270073, 'recall': 0.9890909090909091, 'f1': 0.9908925318761385, 'number': 550} | {'precision': 0.7647058823529411, 'recall': 0.21311475409836064, 'f1': 0.3333333333333333, 'number': 61} | {'precision': 1.0, 'recall': 0.013157894736842105, 'f1': 0.025974025974025976, 'number': 228} | {'precision': 0.9651293588301463, 'recall': 0.9756652262906527, 'f1': 0.9703686948654152, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.5491525423728814, 'recall': 0.8901098901098901, 'f1': 0.6792452830188679, 'number': 364} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | 0.9030 | 0.8903 | 0.8966 | 0.8785 |
| 0.4689 | 3.0 | 162 | 0.3124 | {'precision': 0.9, 'recall': 0.8181818181818182, 'f1': 0.8571428571428572, 'number': 22} | {'precision': 0.6666666666666666, 'recall': 0.044444444444444446, 'f1': 0.08333333333333334, 'number': 45} | {'precision': 0.3732394366197183, 'recall': 0.8153846153846154, 'f1': 0.5120772946859904, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.6533333333333333, 'recall': 0.5384615384615384, 'f1': 0.5903614457831325, 'number': 91} | {'precision': 0.3225806451612903, 'recall': 0.4166666666666667, 'f1': 0.3636363636363636, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9876543209876543, 'recall': 0.9169054441260746, 'f1': 0.9509658246656761, 'number': 349} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 24} | {'precision': 0.975, 'recall': 0.9927272727272727, 'f1': 0.9837837837837837, 'number': 550} | {'precision': 0.9607843137254902, 'recall': 0.8032786885245902, 'f1': 0.8750000000000001, 'number': 61} | {'precision': 0.8333333333333334, 'recall': 0.5482456140350878, 'f1': 0.6613756613756614, 'number': 228} | {'precision': 0.9841676367869616, 'recall': 0.9613372754150558, 'f1': 0.9726184997699034, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.6236162361623616, 'recall': 0.9285714285714286, 'f1': 0.7461368653421634, 'number': 364} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | 0.9264 | 0.9159 | 0.9211 | 0.9070 |
| 0.3417 | 4.0 | 216 | 0.2384 | {'precision': 0.4782608695652174, 'recall': 0.5, 'f1': 0.4888888888888889, 'number': 22} | {'precision': 0.75, 'recall': 0.6666666666666666, 'f1': 0.7058823529411765, 'number': 45} | {'precision': 0.4028776978417266, 'recall': 0.8615384615384616, 'f1': 0.5490196078431372, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8666666666666667, 'recall': 0.2857142857142857, 'f1': 0.42975206611570255, 'number': 91} | {'precision': 0.6296296296296297, 'recall': 0.7083333333333334, 'f1': 0.6666666666666667, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9906542056074766, 'recall': 0.9111747851002865, 'f1': 0.9492537313432836, 'number': 349} | {'precision': 0.5714285714285714, 'recall': 0.16666666666666666, 'f1': 0.25806451612903225, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 24} | {'precision': 0.992633517495396, 'recall': 0.98, 'f1': 0.9862763037511437, 'number': 550} | {'precision': 0.4956521739130435, 'recall': 0.9344262295081968, 'f1': 0.6477272727272727, 'number': 61} | {'precision': 0.890295358649789, 'recall': 0.9254385964912281, 'f1': 0.9075268817204301, 'number': 228} | {'precision': 0.9927075982121853, 'recall': 0.9597452808733227, 'f1': 0.9759481961147086, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8300970873786407, 'recall': 0.9395604395604396, 'f1': 0.881443298969072, 'number': 364} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | 0.9489 | 0.9309 | 0.9398 | 0.9263 |
| 0.2801 | 5.0 | 270 | 0.1850 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.6444444444444445, 'recall': 0.6444444444444445, 'f1': 0.6444444444444445, 'number': 45} | {'precision': 0.47540983606557374, 'recall': 0.8923076923076924, 'f1': 0.6203208556149733, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.6666666666666666, 'recall': 0.6373626373626373, 'f1': 0.651685393258427, 'number': 91} | {'precision': 0.875, 'recall': 0.875, 'f1': 0.875, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9937888198757764, 'recall': 0.9169054441260746, 'f1': 0.9538002980625931, 'number': 349} | {'precision': 0.5588235294117647, 'recall': 0.7916666666666666, 'f1': 0.6551724137931034, 'number': 24} | {'precision': 1.0, 'recall': 0.7916666666666666, 'f1': 0.8837209302325582, 'number': 24} | {'precision': 0.9927536231884058, 'recall': 0.9963636363636363, 'f1': 0.9945553539019963, 'number': 550} | {'precision': 0.8, 'recall': 0.9180327868852459, 'f1': 0.8549618320610688, 'number': 61} | {'precision': 0.923728813559322, 'recall': 0.956140350877193, 'f1': 0.939655172413793, 'number': 228} | {'precision': 0.9845977011494252, 'recall': 0.9740732317489197, 'f1': 0.9793071910369269, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8112745098039216, 'recall': 0.9093406593406593, 'f1': 0.8575129533678757, 'number': 364} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | 0.9507 | 0.9547 | 0.9527 | 0.9411 |
| 0.2181 | 6.0 | 324 | 0.2069 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.8095238095238095, 'recall': 0.7555555555555555, 'f1': 0.7816091954022989, 'number': 45} | {'precision': 0.5257731958762887, 'recall': 0.7846153846153846, 'f1': 0.6296296296296297, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.7948717948717948, 'recall': 0.6813186813186813, 'f1': 0.7337278106508875, 'number': 91} | {'precision': 0.92, 'recall': 0.9583333333333334, 'f1': 0.9387755102040817, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8598382749326146, 'recall': 0.9140401146131805, 'f1': 0.8861111111111111, 'number': 349} | {'precision': 0.42857142857142855, 'recall': 1.0, 'f1': 0.6, 'number': 24} | {'precision': 1.0, 'recall': 0.7916666666666666, 'f1': 0.8837209302325582, 'number': 24} | {'precision': 0.9981617647058824, 'recall': 0.9872727272727273, 'f1': 0.9926873857404023, 'number': 550} | {'precision': 0.8507462686567164, 'recall': 0.9344262295081968, 'f1': 0.8906250000000001, 'number': 61} | {'precision': 0.9273504273504274, 'recall': 0.9517543859649122, 'f1': 0.9393939393939393, 'number': 228} | {'precision': 0.9936064409187781, 'recall': 0.9542870138730953, 'f1': 0.9735498839907193, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8042452830188679, 'recall': 0.9368131868131868, 'f1': 0.8654822335025381, 'number': 364} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | 0.9524 | 0.9428 | 0.9476 | 0.9360 |
| 0.1944 | 7.0 | 378 | 0.2228 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7906976744186046, 'recall': 0.7555555555555555, 'f1': 0.7727272727272727, 'number': 45} | {'precision': 0.46153846153846156, 'recall': 0.9230769230769231, 'f1': 0.6153846153846155, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8888888888888888, 'recall': 0.5274725274725275, 'f1': 0.6620689655172415, 'number': 91} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9142857142857143, 'recall': 0.9169054441260746, 'f1': 0.9155937052932761, 'number': 349} | {'precision': 0.6923076923076923, 'recall': 0.375, 'f1': 0.48648648648648646, 'number': 24} | {'precision': 0.95, 'recall': 0.7916666666666666, 'f1': 0.8636363636363635, 'number': 24} | {'precision': 1.0, 'recall': 0.9927272727272727, 'f1': 0.9963503649635036, 'number': 550} | {'precision': 0.375, 'recall': 0.9344262295081968, 'f1': 0.5352112676056339, 'number': 61} | {'precision': 0.8615384615384616, 'recall': 0.9824561403508771, 'f1': 0.9180327868852458, 'number': 228} | {'precision': 0.9944738106679482, 'recall': 0.9413236297475551, 'f1': 0.9671690618062859, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8190709046454768, 'recall': 0.9203296703296703, 'f1': 0.8667529107373869, 'number': 364} | {'precision': 0.5, 'recall': 0.7142857142857143, 'f1': 0.588235294117647, 'number': 7} | 0.9424 | 0.9323 | 0.9373 | 0.9288 |
| 0.1623 | 8.0 | 432 | 0.1687 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.8095238095238095, 'recall': 0.7555555555555555, 'f1': 0.7816091954022989, 'number': 45} | {'precision': 0.58, 'recall': 0.8923076923076924, 'f1': 0.703030303030303, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8505747126436781, 'recall': 0.8131868131868132, 'f1': 0.8314606741573034, 'number': 91} | {'precision': 0.9565217391304348, 'recall': 0.9166666666666666, 'f1': 0.9361702127659574, 'number': 24} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9876543209876543, 'recall': 0.9169054441260746, 'f1': 0.9509658246656761, 'number': 349} | {'precision': 0.75, 'recall': 0.5, 'f1': 0.6, 'number': 24} | {'precision': 1.0, 'recall': 0.7916666666666666, 'f1': 0.8837209302325582, 'number': 24} | {'precision': 0.9927797833935018, 'recall': 1.0, 'f1': 0.996376811594203, 'number': 550} | {'precision': 0.9333333333333333, 'recall': 0.9180327868852459, 'f1': 0.9256198347107439, 'number': 61} | {'precision': 0.96, 'recall': 0.9473684210526315, 'f1': 0.9536423841059603, 'number': 228} | {'precision': 0.9812442817932296, 'recall': 0.9756652262906527, 'f1': 0.9784468012316113, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8141361256544503, 'recall': 0.8543956043956044, 'f1': 0.8337801608579088, 'number': 364} | {'precision': 0.5384615384615384, 'recall': 1.0, 'f1': 0.7000000000000001, 'number': 7} | 0.9596 | 0.9561 | 0.9579 | 0.9487 |
| 0.1439 | 9.0 | 486 | 0.1641 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7659574468085106, 'recall': 0.8, 'f1': 0.7826086956521738, 'number': 45} | {'precision': 0.6712328767123288, 'recall': 0.7538461538461538, 'f1': 0.7101449275362318, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.819047619047619, 'recall': 0.945054945054945, 'f1': 0.8775510204081632, 'number': 91} | {'precision': 0.92, 'recall': 0.9583333333333334, 'f1': 0.9387755102040817, 'number': 24} | {'precision': 0.2857142857142857, 'recall': 0.3333333333333333, 'f1': 0.30769230769230765, 'number': 6} | {'precision': 0.9644970414201184, 'recall': 0.9340974212034384, 'f1': 0.9490538573508006, 'number': 349} | {'precision': 0.6956521739130435, 'recall': 0.6666666666666666, 'f1': 0.6808510638297872, 'number': 24} | {'precision': 1.0, 'recall': 0.7916666666666666, 'f1': 0.8837209302325582, 'number': 24} | {'precision': 1.0, 'recall': 0.9818181818181818, 'f1': 0.9908256880733944, 'number': 550} | {'precision': 0.8507462686567164, 'recall': 0.9344262295081968, 'f1': 0.8906250000000001, 'number': 61} | {'precision': 0.9534883720930233, 'recall': 0.8991228070175439, 'f1': 0.9255079006772009, 'number': 228} | {'precision': 0.984177940839257, 'recall': 0.976120081874005, 'f1': 0.9801324503311258, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8287153652392947, 'recall': 0.9038461538461539, 'f1': 0.8646517739816033, 'number': 364} | {'precision': 0.5384615384615384, 'recall': 1.0, 'f1': 0.7000000000000001, 'number': 7} | 0.9610 | 0.9590 | 0.9600 | 0.9494 |
| 0.122 | 10.0 | 540 | 0.1964 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 45} | {'precision': 0.8947368421052632, 'recall': 0.7846153846153846, 'f1': 0.8360655737704918, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.898876404494382, 'recall': 0.8791208791208791, 'f1': 0.8888888888888888, 'number': 91} | {'precision': 0.9565217391304348, 'recall': 0.9166666666666666, 'f1': 0.9361702127659574, 'number': 24} | {'precision': 0.3076923076923077, 'recall': 0.6666666666666666, 'f1': 0.42105263157894735, 'number': 6} | {'precision': 0.9876543209876543, 'recall': 0.9169054441260746, 'f1': 0.9509658246656761, 'number': 349} | {'precision': 0.7894736842105263, 'recall': 0.625, 'f1': 0.6976744186046512, 'number': 24} | {'precision': 1.0, 'recall': 0.7916666666666666, 'f1': 0.8837209302325582, 'number': 24} | {'precision': 0.9927536231884058, 'recall': 0.9963636363636363, 'f1': 0.9945553539019963, 'number': 550} | {'precision': 0.7037037037037037, 'recall': 0.9344262295081968, 'f1': 0.8028169014084507, 'number': 61} | {'precision': 0.9451476793248945, 'recall': 0.9824561403508771, 'f1': 0.9634408602150537, 'number': 228} | {'precision': 0.97850445918134, 'recall': 0.9731635205822151, 'f1': 0.9758266818700113, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8227513227513228, 'recall': 0.8543956043956044, 'f1': 0.8382749326145551, 'number': 364} | {'precision': 0.5384615384615384, 'recall': 1.0, 'f1': 0.7000000000000001, 'number': 7} | 0.9598 | 0.9567 | 0.9583 | 0.9462 |
| 0.1166 | 11.0 | 594 | 0.1719 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.75, 'recall': 0.7333333333333333, 'f1': 0.7415730337078651, 'number': 45} | {'precision': 0.84375, 'recall': 0.8307692307692308, 'f1': 0.8372093023255814, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8526315789473684, 'recall': 0.8901098901098901, 'f1': 0.8709677419354839, 'number': 91} | {'precision': 0.9565217391304348, 'recall': 0.9166666666666666, 'f1': 0.9361702127659574, 'number': 24} | {'precision': 0.36363636363636365, 'recall': 0.6666666666666666, 'f1': 0.4705882352941177, 'number': 6} | {'precision': 0.9847094801223242, 'recall': 0.9226361031518625, 'f1': 0.9526627218934912, 'number': 349} | {'precision': 0.5476190476190477, 'recall': 0.9583333333333334, 'f1': 0.696969696969697, 'number': 24} | {'precision': 0.8, 'recall': 0.8333333333333334, 'f1': 0.816326530612245, 'number': 24} | {'precision': 0.9927797833935018, 'recall': 1.0, 'f1': 0.996376811594203, 'number': 550} | {'precision': 0.9827586206896551, 'recall': 0.9344262295081968, 'f1': 0.9579831932773109, 'number': 61} | {'precision': 0.9487179487179487, 'recall': 0.9736842105263158, 'f1': 0.9610389610389611, 'number': 228} | {'precision': 0.9859834558823529, 'recall': 0.9758926540823288, 'f1': 0.9809121042404846, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8290398126463701, 'recall': 0.9725274725274725, 'f1': 0.8950695322376737, 'number': 364} | {'precision': 0.5384615384615384, 'recall': 1.0, 'f1': 0.7000000000000001, 'number': 7} | 0.9634 | 0.9674 | 0.9654 | 0.9540 |
| 0.094 | 12.0 | 648 | 0.1789 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f1': 0.6666666666666666, 'number': 45} | {'precision': 0.9111111111111111, 'recall': 0.6307692307692307, 'f1': 0.7454545454545455, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.7565217391304347, 'recall': 0.9560439560439561, 'f1': 0.8446601941747574, 'number': 91} | {'precision': 0.88, 'recall': 0.9166666666666666, 'f1': 0.8979591836734694, 'number': 24} | {'precision': 0.2, 'recall': 0.3333333333333333, 'f1': 0.25, 'number': 6} | {'precision': 0.9846625766871165, 'recall': 0.9197707736389685, 'f1': 0.9511111111111111, 'number': 349} | {'precision': 0.5238095238095238, 'recall': 0.9166666666666666, 'f1': 0.6666666666666667, 'number': 24} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 24} | {'precision': 1.0, 'recall': 0.990909090909091, 'f1': 0.995433789954338, 'number': 550} | {'precision': 0.890625, 'recall': 0.9344262295081968, 'f1': 0.9120000000000001, 'number': 61} | {'precision': 0.9411764705882353, 'recall': 0.9824561403508771, 'f1': 0.9613733905579399, 'number': 228} | {'precision': 0.9904872389791183, 'recall': 0.9708892426654537, 'f1': 0.980590329619846, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8091787439613527, 'recall': 0.9203296703296703, 'f1': 0.8611825192802057, 'number': 364} | {'precision': 0.5, 'recall': 0.8571428571428571, 'f1': 0.631578947368421, 'number': 7} | 0.9623 | 0.9583 | 0.9603 | 0.9515 |
| 0.0826 | 13.0 | 702 | 0.2051 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.8222222222222222, 'recall': 0.8222222222222222, 'f1': 0.8222222222222222, 'number': 45} | {'precision': 0.68, 'recall': 0.7846153846153846, 'f1': 0.7285714285714285, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.9080459770114943, 'recall': 0.8681318681318682, 'f1': 0.8876404494382023, 'number': 91} | {'precision': 0.92, 'recall': 0.9583333333333334, 'f1': 0.9387755102040817, 'number': 24} | {'precision': 0.42857142857142855, 'recall': 0.5, 'f1': 0.4615384615384615, 'number': 6} | {'precision': 0.930939226519337, 'recall': 0.9656160458452722, 'f1': 0.9479606188466947, 'number': 349} | {'precision': 0.5454545454545454, 'recall': 0.75, 'f1': 0.631578947368421, 'number': 24} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 24} | {'precision': 1.0, 'recall': 0.9727272727272728, 'f1': 0.9861751152073733, 'number': 550} | {'precision': 0.5876288659793815, 'recall': 0.9344262295081968, 'f1': 0.7215189873417721, 'number': 61} | {'precision': 0.9453781512605042, 'recall': 0.9868421052631579, 'f1': 0.9656652360515022, 'number': 228} | {'precision': 0.9939081537019682, 'recall': 0.9647486922901979, 'f1': 0.9791113675706867, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.7991169977924945, 'recall': 0.9945054945054945, 'f1': 0.8861689106487147, 'number': 364} | {'precision': 0.5384615384615384, 'recall': 1.0, 'f1': 0.7000000000000001, 'number': 7} | 0.9575 | 0.9607 | 0.9591 | 0.9494 |
| 0.0638 | 14.0 | 756 | 0.1905 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 45} | {'precision': 0.8412698412698413, 'recall': 0.8153846153846154, 'f1': 0.8281250000000001, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8602150537634409, 'recall': 0.8791208791208791, 'f1': 0.8695652173913043, 'number': 91} | {'precision': 0.92, 'recall': 0.9583333333333334, 'f1': 0.9387755102040817, 'number': 24} | {'precision': 0.5, 'recall': 0.6666666666666666, 'f1': 0.5714285714285715, 'number': 6} | {'precision': 0.9760479041916168, 'recall': 0.9340974212034384, 'f1': 0.9546120058565154, 'number': 349} | {'precision': 0.5128205128205128, 'recall': 0.8333333333333334, 'f1': 0.6349206349206349, 'number': 24} | {'precision': 0.7666666666666667, 'recall': 0.9583333333333334, 'f1': 0.8518518518518519, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.9827586206896551, 'recall': 0.9344262295081968, 'f1': 0.9579831932773109, 'number': 61} | {'precision': 0.9439655172413793, 'recall': 0.9605263157894737, 'f1': 0.9521739130434783, 'number': 228} | {'precision': 0.9733212751526114, 'recall': 0.9790766431657949, 'f1': 0.9761904761904762, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8246575342465754, 'recall': 0.8269230769230769, 'f1': 0.8257887517146777, 'number': 364} | {'precision': 0.5384615384615384, 'recall': 1.0, 'f1': 0.7000000000000001, 'number': 7} | 0.9562 | 0.9612 | 0.9587 | 0.9479 |
| 0.0532 | 15.0 | 810 | 0.1943 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7555555555555555, 'recall': 0.7555555555555555, 'f1': 0.7555555555555555, 'number': 45} | {'precision': 0.75, 'recall': 0.8307692307692308, 'f1': 0.7883211678832116, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8539325842696629, 'recall': 0.8351648351648352, 'f1': 0.8444444444444446, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.5, 'recall': 0.8333333333333334, 'f1': 0.625, 'number': 6} | {'precision': 0.9137466307277629, 'recall': 0.9713467048710601, 'f1': 0.9416666666666668, 'number': 349} | {'precision': 0.5128205128205128, 'recall': 0.8333333333333334, 'f1': 0.6349206349206349, 'number': 24} | {'precision': 0.7666666666666667, 'recall': 0.9583333333333334, 'f1': 0.8518518518518519, 'number': 24} | {'precision': 1.0, 'recall': 0.990909090909091, 'f1': 0.995433789954338, 'number': 550} | {'precision': 0.9193548387096774, 'recall': 0.9344262295081968, 'f1': 0.9268292682926829, 'number': 61} | {'precision': 0.9377593360995851, 'recall': 0.9912280701754386, 'f1': 0.9637526652452025, 'number': 228} | {'precision': 0.9934929119219149, 'recall': 0.9722538094155105, 'f1': 0.982758620689655, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8345323741007195, 'recall': 0.9560439560439561, 'f1': 0.8911651728553137, 'number': 364} | {'precision': 0.5384615384615384, 'recall': 1.0, 'f1': 0.7000000000000001, 'number': 7} | 0.9634 | 0.9663 | 0.9649 | 0.9537 |
| 0.0455 | 16.0 | 864 | 0.1989 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 45} | {'precision': 0.8928571428571429, 'recall': 0.7692307692307693, 'f1': 0.8264462809917357, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8415841584158416, 'recall': 0.9340659340659341, 'f1': 0.8854166666666666, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 6} | {'precision': 0.9426934097421203, 'recall': 0.9426934097421203, 'f1': 0.9426934097421205, 'number': 349} | {'precision': 0.5483870967741935, 'recall': 0.7083333333333334, 'f1': 0.6181818181818182, 'number': 24} | {'precision': 0.84, 'recall': 0.875, 'f1': 0.8571428571428572, 'number': 24} | {'precision': 1.0, 'recall': 0.9927272727272727, 'f1': 0.9963503649635036, 'number': 550} | {'precision': 0.821917808219178, 'recall': 0.9836065573770492, 'f1': 0.8955223880597014, 'number': 61} | {'precision': 0.9559471365638766, 'recall': 0.9517543859649122, 'f1': 0.9538461538461538, 'number': 228} | {'precision': 0.9907300115874855, 'recall': 0.9722538094155105, 'f1': 0.9814049586776858, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8170426065162907, 'recall': 0.8956043956043956, 'f1': 0.854521625163827, 'number': 364} | {'precision': 0.5384615384615384, 'recall': 1.0, 'f1': 0.7000000000000001, 'number': 7} | 0.9650 | 0.9601 | 0.9625 | 0.9534 |
| 0.0418 | 17.0 | 918 | 0.1912 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 45} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8191489361702128, 'recall': 0.8461538461538461, 'f1': 0.8324324324324325, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.625, 'recall': 0.8333333333333334, 'f1': 0.7142857142857143, 'number': 6} | {'precision': 0.9701492537313433, 'recall': 0.9312320916905444, 'f1': 0.9502923976608186, 'number': 349} | {'precision': 0.5384615384615384, 'recall': 0.5833333333333334, 'f1': 0.5599999999999999, 'number': 24} | {'precision': 0.7857142857142857, 'recall': 0.9166666666666666, 'f1': 0.8461538461538461, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.9090909090909091, 'recall': 0.9836065573770492, 'f1': 0.9448818897637795, 'number': 61} | {'precision': 0.9531914893617022, 'recall': 0.9824561403508771, 'f1': 0.9676025917926566, 'number': 228} | {'precision': 0.9934548854604955, 'recall': 0.966568114623607, 'f1': 0.9798270893371759, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8271028037383178, 'recall': 0.9725274725274725, 'f1': 0.8939393939393939, 'number': 364} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 7} | 0.9681 | 0.9607 | 0.9644 | 0.9525 |
| 0.0392 | 18.0 | 972 | 0.2001 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 45} | {'precision': 0.8225806451612904, 'recall': 0.7846153846153846, 'f1': 0.8031496062992126, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8791208791208791, 'recall': 0.8791208791208791, 'f1': 0.8791208791208791, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.625, 'recall': 0.8333333333333334, 'f1': 0.7142857142857143, 'number': 6} | {'precision': 0.967551622418879, 'recall': 0.9398280802292264, 'f1': 0.9534883720930233, 'number': 349} | {'precision': 0.5428571428571428, 'recall': 0.7916666666666666, 'f1': 0.6440677966101694, 'number': 24} | {'precision': 0.8695652173913043, 'recall': 0.8333333333333334, 'f1': 0.851063829787234, 'number': 24} | {'precision': 1.0, 'recall': 0.9981818181818182, 'f1': 0.9990900818926297, 'number': 550} | {'precision': 0.8676470588235294, 'recall': 0.9672131147540983, 'f1': 0.9147286821705426, 'number': 61} | {'precision': 0.9487179487179487, 'recall': 0.9736842105263158, 'f1': 0.9610389610389611, 'number': 228} | {'precision': 0.9981255857544518, 'recall': 0.9688423925403684, 'f1': 0.983266012694749, 'number': 4397} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8081264108352144, 'recall': 0.9835164835164835, 'f1': 0.8872366790582403, 'number': 364} | {'precision': 0.375, 'recall': 0.42857142857142855, 'f1': 0.39999999999999997, 'number': 7} | 0.9703 | 0.9634 | 0.9668 | 0.9572 |
| 0.0345 | 19.0 | 1026 | 0.2219 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 45} | {'precision': 0.7794117647058824, 'recall': 0.8153846153846154, 'f1': 0.7969924812030074, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8651685393258427, 'recall': 0.8461538461538461, 'f1': 0.8555555555555556, 'number': 91} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 24} | {'precision': 0.7142857142857143, 'recall': 0.8333333333333334, 'f1': 0.7692307692307692, 'number': 6} | {'precision': 0.972972972972973, 'recall': 0.9283667621776505, 'f1': 0.9501466275659823, 'number': 349} | {'precision': 0.5161290322580645, 'recall': 0.6666666666666666, 'f1': 0.5818181818181819, 'number': 24} | {'precision': 0.84, 'recall': 0.875, 'f1': 0.8571428571428572, 'number': 24} | {'precision': 1.0, 'recall': 0.9890909090909091, 'f1': 0.9945155393053016, 'number': 550} | {'precision': 0.9032258064516129, 'recall': 0.9180327868852459, 'f1': 0.9105691056910569, 'number': 61} | {'precision': 0.9377593360995851, 'recall': 0.9912280701754386, 'f1': 0.9637526652452025, 'number': 228} | {'precision': 0.9946274234991824, 'recall': 0.9683875369570162, 'f1': 0.9813321041714681, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8238095238095238, 'recall': 0.9505494505494505, 'f1': 0.8826530612244897, 'number': 364} | {'precision': 0.3333333333333333, 'recall': 0.42857142857142855, 'f1': 0.375, 'number': 7} | 0.9689 | 0.9593 | 0.9641 | 0.9534 |
| 0.0305 | 20.0 | 1080 | 0.2011 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7391304347826086, 'recall': 0.7555555555555555, 'f1': 0.7472527472527473, 'number': 45} | {'precision': 0.8484848484848485, 'recall': 0.8615384615384616, 'f1': 0.8549618320610687, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8804347826086957, 'recall': 0.8901098901098901, 'f1': 0.8852459016393442, 'number': 91} | {'precision': 0.88, 'recall': 0.9166666666666666, 'f1': 0.8979591836734694, 'number': 24} | {'precision': 0.5555555555555556, 'recall': 0.8333333333333334, 'f1': 0.6666666666666667, 'number': 6} | {'precision': 0.9512893982808023, 'recall': 0.9512893982808023, 'f1': 0.9512893982808023, 'number': 349} | {'precision': 0.5348837209302325, 'recall': 0.9583333333333334, 'f1': 0.6865671641791045, 'number': 24} | {'precision': 0.7586206896551724, 'recall': 0.9166666666666666, 'f1': 0.830188679245283, 'number': 24} | {'precision': 0.9981785063752276, 'recall': 0.9963636363636363, 'f1': 0.997270245677889, 'number': 550} | {'precision': 0.8571428571428571, 'recall': 0.9836065573770492, 'f1': 0.916030534351145, 'number': 61} | {'precision': 0.9567099567099567, 'recall': 0.9692982456140351, 'f1': 0.9629629629629631, 'number': 228} | {'precision': 0.9853580416380691, 'recall': 0.9795314987491471, 'f1': 0.9824361313868614, 'number': 4397} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 1.0, 'recall': 0.75, 'f1': 0.8571428571428571, 'number': 4} | {'precision': 0.8372093023255814, 'recall': 0.8901098901098901, 'f1': 0.862849533954727, 'number': 364} | {'precision': 0.36363636363636365, 'recall': 0.5714285714285714, 'f1': 0.4444444444444444, 'number': 7} | 0.9619 | 0.9679 | 0.9649 | 0.9551 |
| 0.0269 | 21.0 | 1134 | 0.2270 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7555555555555555, 'recall': 0.7555555555555555, 'f1': 0.7555555555555555, 'number': 45} | {'precision': 0.7536231884057971, 'recall': 0.8, 'f1': 0.7761194029850746, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8777777777777778, 'recall': 0.8681318681318682, 'f1': 0.8729281767955802, 'number': 91} | {'precision': 0.9565217391304348, 'recall': 0.9166666666666666, 'f1': 0.9361702127659574, 'number': 24} | {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f1': 0.6666666666666666, 'number': 6} | {'precision': 0.9287671232876712, 'recall': 0.9713467048710601, 'f1': 0.9495798319327731, 'number': 349} | {'precision': 0.5294117647058824, 'recall': 0.75, 'f1': 0.6206896551724139, 'number': 24} | {'precision': 0.84, 'recall': 0.875, 'f1': 0.8571428571428572, 'number': 24} | {'precision': 0.9927536231884058, 'recall': 0.9963636363636363, 'f1': 0.9945553539019963, 'number': 550} | {'precision': 0.855072463768116, 'recall': 0.9672131147540983, 'f1': 0.9076923076923077, 'number': 61} | {'precision': 0.948051948051948, 'recall': 0.9605263157894737, 'f1': 0.9542483660130718, 'number': 228} | {'precision': 0.9844036697247707, 'recall': 0.976120081874005, 'f1': 0.9802443759278292, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8310991957104558, 'recall': 0.8516483516483516, 'f1': 0.841248303934871, 'number': 364} | {'precision': 0.4444444444444444, 'recall': 0.5714285714285714, 'f1': 0.5, 'number': 7} | 0.9600 | 0.9612 | 0.9606 | 0.9507 |
| 0.0238 | 22.0 | 1188 | 0.2147 | {'precision': 0.9545454545454546, 'recall': 0.9545454545454546, 'f1': 0.9545454545454546, 'number': 22} | {'precision': 0.6739130434782609, 'recall': 0.6888888888888889, 'f1': 0.6813186813186812, 'number': 45} | {'precision': 0.7368421052631579, 'recall': 0.8615384615384616, 'f1': 0.7943262411347517, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8539325842696629, 'recall': 0.8351648351648352, 'f1': 0.8444444444444446, 'number': 91} | {'precision': 0.92, 'recall': 0.9583333333333334, 'f1': 0.9387755102040817, 'number': 24} | {'precision': 0.5, 'recall': 0.6666666666666666, 'f1': 0.5714285714285715, 'number': 6} | {'precision': 0.9733727810650887, 'recall': 0.9426934097421203, 'f1': 0.9577874818049491, 'number': 349} | {'precision': 0.4666666666666667, 'recall': 0.5833333333333334, 'f1': 0.5185185185185186, 'number': 24} | {'precision': 0.7586206896551724, 'recall': 0.9166666666666666, 'f1': 0.830188679245283, 'number': 24} | {'precision': 0.9981785063752276, 'recall': 0.9963636363636363, 'f1': 0.997270245677889, 'number': 550} | {'precision': 0.8571428571428571, 'recall': 0.9836065573770492, 'f1': 0.916030534351145, 'number': 61} | {'precision': 0.9491525423728814, 'recall': 0.9824561403508771, 'f1': 0.9655172413793103, 'number': 228} | {'precision': 0.996048349604835, 'recall': 0.974528087332272, 'f1': 0.9851707092769285, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8302325581395349, 'recall': 0.9807692307692307, 'f1': 0.8992443324937028, 'number': 364} | {'precision': 0.5, 'recall': 0.2857142857142857, 'f1': 0.36363636363636365, 'number': 7} | 0.9677 | 0.9662 | 0.9669 | 0.9570 |
| 0.0246 | 23.0 | 1242 | 0.2211 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 45} | {'precision': 0.7536231884057971, 'recall': 0.8, 'f1': 0.7761194029850746, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8351648351648352, 'recall': 0.8351648351648352, 'f1': 0.8351648351648353, 'number': 91} | {'precision': 0.92, 'recall': 0.9583333333333334, 'f1': 0.9387755102040817, 'number': 24} | {'precision': 0.6, 'recall': 0.5, 'f1': 0.5454545454545454, 'number': 6} | {'precision': 0.9709302325581395, 'recall': 0.9570200573065902, 'f1': 0.963924963924964, 'number': 349} | {'precision': 0.4666666666666667, 'recall': 0.5833333333333334, 'f1': 0.5185185185185186, 'number': 24} | {'precision': 0.8148148148148148, 'recall': 0.9166666666666666, 'f1': 0.8627450980392156, 'number': 24} | {'precision': 1.0, 'recall': 0.9890909090909091, 'f1': 0.9945155393053016, 'number': 550} | {'precision': 0.8823529411764706, 'recall': 0.9836065573770492, 'f1': 0.9302325581395349, 'number': 61} | {'precision': 0.9535864978902954, 'recall': 0.9912280701754386, 'f1': 0.9720430107526881, 'number': 228} | {'precision': 0.9962790697674418, 'recall': 0.9743006595405959, 'f1': 0.9851672990686443, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 1.0, 'recall': 0.75, 'f1': 0.8571428571428571, 'number': 4} | {'precision': 0.8333333333333334, 'recall': 0.9752747252747253, 'f1': 0.8987341772151899, 'number': 364} | {'precision': 0.25, 'recall': 0.2857142857142857, 'f1': 0.26666666666666666, 'number': 7} | 0.9697 | 0.9665 | 0.9681 | 0.9584 |
| 0.0198 | 24.0 | 1296 | 0.2323 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7111111111111111, 'recall': 0.7111111111111111, 'f1': 0.7111111111111111, 'number': 45} | {'precision': 0.7012987012987013, 'recall': 0.8307692307692308, 'f1': 0.76056338028169, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8387096774193549, 'recall': 0.8571428571428571, 'f1': 0.8478260869565217, 'number': 91} | {'precision': 0.92, 'recall': 0.9583333333333334, 'f1': 0.9387755102040817, 'number': 24} | {'precision': 0.4444444444444444, 'recall': 0.6666666666666666, 'f1': 0.5333333333333333, 'number': 6} | {'precision': 0.9790419161676647, 'recall': 0.9369627507163324, 'f1': 0.9575402635431919, 'number': 349} | {'precision': 0.5294117647058824, 'recall': 0.75, 'f1': 0.6206896551724139, 'number': 24} | {'precision': 0.7586206896551724, 'recall': 0.9166666666666666, 'f1': 0.830188679245283, 'number': 24} | {'precision': 1.0, 'recall': 0.9872727272727273, 'f1': 0.9935956084172004, 'number': 550} | {'precision': 0.8571428571428571, 'recall': 0.9836065573770492, 'f1': 0.916030534351145, 'number': 61} | {'precision': 0.9416666666666667, 'recall': 0.9912280701754386, 'f1': 0.9658119658119658, 'number': 228} | {'precision': 0.996046511627907, 'recall': 0.9740732317489197, 'f1': 0.9849373347131194, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.8215102974828375, 'recall': 0.9862637362637363, 'f1': 0.8963795255930088, 'number': 364} | {'precision': 0.25, 'recall': 0.2857142857142857, 'f1': 0.26666666666666666, 'number': 7} | 0.9658 | 0.9660 | 0.9659 | 0.9562 |
| 0.0172 | 25.0 | 1350 | 0.2257 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.717391304347826, 'recall': 0.7333333333333333, 'f1': 0.7252747252747253, 'number': 45} | {'precision': 0.7142857142857143, 'recall': 0.8461538461538461, 'f1': 0.7746478873239436, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8795180722891566, 'recall': 0.8021978021978022, 'f1': 0.8390804597701149, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.4444444444444444, 'recall': 0.6666666666666666, 'f1': 0.5333333333333333, 'number': 6} | {'precision': 0.9791044776119403, 'recall': 0.9398280802292264, 'f1': 0.9590643274853802, 'number': 349} | {'precision': 0.5428571428571428, 'recall': 0.7916666666666666, 'f1': 0.6440677966101694, 'number': 24} | {'precision': 0.8148148148148148, 'recall': 0.9166666666666666, 'f1': 0.8627450980392156, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8695652173913043, 'recall': 0.9836065573770492, 'f1': 0.923076923076923, 'number': 61} | {'precision': 0.9658119658119658, 'recall': 0.9912280701754386, 'f1': 0.9783549783549784, 'number': 228} | {'precision': 0.992596020360944, 'recall': 0.9756652262906527, 'f1': 0.9840578047941277, 'number': 4397} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8455882352941176, 'recall': 0.9478021978021978, 'f1': 0.8937823834196891, 'number': 364} | {'precision': 0.4, 'recall': 0.5714285714285714, 'f1': 0.47058823529411764, 'number': 7} | 0.9683 | 0.9668 | 0.9676 | 0.9584 |
| 0.0178 | 26.0 | 1404 | 0.2285 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7555555555555555, 'recall': 0.7555555555555555, 'f1': 0.7555555555555555, 'number': 45} | {'precision': 0.7323943661971831, 'recall': 0.8, 'f1': 0.7647058823529411, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8444444444444444, 'recall': 0.8351648351648352, 'f1': 0.839779005524862, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.8, 'recall': 0.6666666666666666, 'f1': 0.7272727272727272, 'number': 6} | {'precision': 0.973293768545994, 'recall': 0.9398280802292264, 'f1': 0.956268221574344, 'number': 349} | {'precision': 0.5294117647058824, 'recall': 0.75, 'f1': 0.6206896551724139, 'number': 24} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8805970149253731, 'recall': 0.9672131147540983, 'f1': 0.9218749999999999, 'number': 61} | {'precision': 0.9696969696969697, 'recall': 0.9824561403508771, 'f1': 0.9760348583877996, 'number': 228} | {'precision': 0.998598785614199, 'recall': 0.9724812372071867, 'f1': 0.985366977762415, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8486997635933806, 'recall': 0.9862637362637363, 'f1': 0.9123252858958069, 'number': 364} | {'precision': 0.3333333333333333, 'recall': 0.42857142857142855, 'f1': 0.375, 'number': 7} | 0.9731 | 0.9662 | 0.9696 | 0.9595 |
| 0.0149 | 27.0 | 1458 | 0.2446 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 45} | {'precision': 0.75, 'recall': 0.8307692307692308, 'f1': 0.7883211678832116, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8666666666666667, 'recall': 0.8571428571428571, 'f1': 0.861878453038674, 'number': 91} | {'precision': 0.92, 'recall': 0.9583333333333334, 'f1': 0.9387755102040817, 'number': 24} | {'precision': 0.8, 'recall': 0.6666666666666666, 'f1': 0.7272727272727272, 'number': 6} | {'precision': 0.9764705882352941, 'recall': 0.9512893982808023, 'f1': 0.9637155297532656, 'number': 349} | {'precision': 0.5428571428571428, 'recall': 0.7916666666666666, 'f1': 0.6440677966101694, 'number': 24} | {'precision': 0.88, 'recall': 0.9166666666666666, 'f1': 0.8979591836734694, 'number': 24} | {'precision': 1.0, 'recall': 0.9945454545454545, 'f1': 0.9972652689152233, 'number': 550} | {'precision': 0.855072463768116, 'recall': 0.9672131147540983, 'f1': 0.9076923076923077, 'number': 61} | {'precision': 0.9613733905579399, 'recall': 0.9824561403508771, 'f1': 0.9718004338394794, 'number': 228} | {'precision': 0.9848866498740554, 'recall': 0.9781669319990903, 'f1': 0.9815152898219991, 'number': 4397} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.827906976744186, 'recall': 0.978021978021978, 'f1': 0.8967254408060452, 'number': 364} | {'precision': 0.5, 'recall': 0.7142857142857143, 'f1': 0.588235294117647, 'number': 7} | 0.9628 | 0.9714 | 0.9671 | 0.9562 |
| 0.0137 | 28.0 | 1512 | 0.2130 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.75, 'recall': 0.7333333333333333, 'f1': 0.7415730337078651, 'number': 45} | {'precision': 0.7534246575342466, 'recall': 0.8461538461538461, 'f1': 0.7971014492753623, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8681318681318682, 'recall': 0.8681318681318682, 'f1': 0.8681318681318682, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.8, 'recall': 0.6666666666666666, 'f1': 0.7272727272727272, 'number': 6} | {'precision': 0.9791044776119403, 'recall': 0.9398280802292264, 'f1': 0.9590643274853802, 'number': 349} | {'precision': 0.4666666666666667, 'recall': 0.5833333333333334, 'f1': 0.5185185185185186, 'number': 24} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8571428571428571, 'recall': 0.9836065573770492, 'f1': 0.916030534351145, 'number': 61} | {'precision': 0.9612068965517241, 'recall': 0.9780701754385965, 'f1': 0.9695652173913044, 'number': 228} | {'precision': 0.99102416570771, 'recall': 0.979304070957471, 'f1': 0.9851292610386638, 'number': 4397} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8395061728395061, 'recall': 0.9340659340659341, 'f1': 0.8842652795838751, 'number': 364} | {'precision': 0.4, 'recall': 0.5714285714285714, 'f1': 0.47058823529411764, 'number': 7} | 0.9681 | 0.9682 | 0.9682 | 0.9591 |
| 0.0122 | 29.0 | 1566 | 0.2236 | {'precision': 0.9545454545454546, 'recall': 0.9545454545454546, 'f1': 0.9545454545454546, 'number': 22} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 45} | {'precision': 0.8387096774193549, 'recall': 0.8, 'f1': 0.8188976377952757, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8617021276595744, 'recall': 0.8901098901098901, 'f1': 0.8756756756756756, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.7142857142857143, 'recall': 0.8333333333333334, 'f1': 0.7692307692307692, 'number': 6} | {'precision': 0.9761904761904762, 'recall': 0.9398280802292264, 'f1': 0.9576642335766423, 'number': 349} | {'precision': 0.4838709677419355, 'recall': 0.625, 'f1': 0.5454545454545454, 'number': 24} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8571428571428571, 'recall': 0.9836065573770492, 'f1': 0.916030534351145, 'number': 61} | {'precision': 0.9533898305084746, 'recall': 0.9868421052631579, 'f1': 0.9698275862068965, 'number': 228} | {'precision': 0.9932777005099676, 'recall': 0.974528087332272, 'f1': 0.9838135690506256, 'number': 4397} | {'precision': 0.6666666666666666, 'recall': 1.0, 'f1': 0.8, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8452088452088452, 'recall': 0.945054945054945, 'f1': 0.8923476005188066, 'number': 364} | {'precision': 0.3333333333333333, 'recall': 0.2857142857142857, 'f1': 0.30769230769230765, 'number': 7} | 0.9708 | 0.9665 | 0.9686 | 0.9585 |
| 0.0118 | 30.0 | 1620 | 0.2360 | {'precision': 0.9545454545454546, 'recall': 0.9545454545454546, 'f1': 0.9545454545454546, 'number': 22} | {'precision': 0.7555555555555555, 'recall': 0.7555555555555555, 'f1': 0.7555555555555555, 'number': 45} | {'precision': 0.7341772151898734, 'recall': 0.8923076923076924, 'f1': 0.8055555555555556, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8928571428571429, 'recall': 0.8241758241758241, 'f1': 0.8571428571428571, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.8, 'recall': 0.6666666666666666, 'f1': 0.7272727272727272, 'number': 6} | {'precision': 0.9544159544159544, 'recall': 0.9598853868194842, 'f1': 0.9571428571428571, 'number': 349} | {'precision': 0.4838709677419355, 'recall': 0.625, 'f1': 0.5454545454545454, 'number': 24} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8571428571428571, 'recall': 0.9836065573770492, 'f1': 0.916030534351145, 'number': 61} | {'precision': 0.9576271186440678, 'recall': 0.9912280701754386, 'f1': 0.9741379310344828, 'number': 228} | {'precision': 0.9951388888888889, 'recall': 0.977712076415738, 'f1': 0.986348514397155, 'number': 4397} | {'precision': 0.6666666666666666, 'recall': 1.0, 'f1': 0.8, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8450704225352113, 'recall': 0.989010989010989, 'f1': 0.9113924050632911, 'number': 364} | {'precision': 0.5555555555555556, 'recall': 0.7142857142857143, 'f1': 0.6250000000000001, 'number': 7} | 0.9695 | 0.9727 | 0.9711 | 0.9614 |
| 0.0108 | 31.0 | 1674 | 0.2373 | {'precision': 0.9545454545454546, 'recall': 0.9545454545454546, 'f1': 0.9545454545454546, 'number': 22} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 45} | {'precision': 0.7534246575342466, 'recall': 0.8461538461538461, 'f1': 0.7971014492753623, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8666666666666667, 'recall': 0.8571428571428571, 'f1': 0.861878453038674, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.8, 'recall': 0.6666666666666666, 'f1': 0.7272727272727272, 'number': 6} | {'precision': 0.9563953488372093, 'recall': 0.9426934097421203, 'f1': 0.9494949494949495, 'number': 349} | {'precision': 0.4666666666666667, 'recall': 0.5833333333333334, 'f1': 0.5185185185185186, 'number': 24} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8571428571428571, 'recall': 0.9836065573770492, 'f1': 0.916030534351145, 'number': 61} | {'precision': 0.9613733905579399, 'recall': 0.9824561403508771, 'f1': 0.9718004338394794, 'number': 228} | {'precision': 0.9955936920222634, 'recall': 0.9763475096656812, 'f1': 0.9858766792972786, 'number': 4397} | {'precision': 0.6666666666666666, 'recall': 1.0, 'f1': 0.8, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8341121495327103, 'recall': 0.9807692307692307, 'f1': 0.9015151515151515, 'number': 364} | {'precision': 0.5555555555555556, 'recall': 0.7142857142857143, 'f1': 0.6250000000000001, 'number': 7} | 0.9695 | 0.9701 | 0.9698 | 0.9601 |
| 0.0088 | 32.0 | 1728 | 0.2458 | {'precision': 0.9545454545454546, 'recall': 0.9545454545454546, 'f1': 0.9545454545454546, 'number': 22} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 45} | {'precision': 0.7532467532467533, 'recall': 0.8923076923076924, 'f1': 0.8169014084507042, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.4444444444444444, 'recall': 0.6666666666666666, 'f1': 0.5333333333333333, 'number': 6} | {'precision': 0.9761904761904762, 'recall': 0.9398280802292264, 'f1': 0.9576642335766423, 'number': 349} | {'precision': 0.5, 'recall': 0.6666666666666666, 'f1': 0.5714285714285715, 'number': 24} | {'precision': 0.7857142857142857, 'recall': 0.9166666666666666, 'f1': 0.8461538461538461, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.855072463768116, 'recall': 0.9672131147540983, 'f1': 0.9076923076923077, 'number': 61} | {'precision': 0.9615384615384616, 'recall': 0.9868421052631579, 'f1': 0.974025974025974, 'number': 228} | {'precision': 0.9965059399021663, 'recall': 0.972936092790539, 'f1': 0.9845799769850403, 'number': 4397} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8267898383371824, 'recall': 0.9835164835164835, 'f1': 0.8983688833124215, 'number': 364} | {'precision': 0.5, 'recall': 0.5714285714285714, 'f1': 0.5333333333333333, 'number': 7} | 0.9692 | 0.9684 | 0.9688 | 0.9585 |
| 0.0088 | 33.0 | 1782 | 0.2371 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7391304347826086, 'recall': 0.7555555555555555, 'f1': 0.7472527472527473, 'number': 45} | {'precision': 0.7777777777777778, 'recall': 0.8615384615384616, 'f1': 0.8175182481751826, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8666666666666667, 'recall': 0.8571428571428571, 'f1': 0.861878453038674, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.5555555555555556, 'recall': 0.8333333333333334, 'f1': 0.6666666666666667, 'number': 6} | {'precision': 0.9704142011834319, 'recall': 0.9398280802292264, 'f1': 0.9548762736535662, 'number': 349} | {'precision': 0.4666666666666667, 'recall': 0.5833333333333334, 'f1': 0.5185185185185186, 'number': 24} | {'precision': 0.8148148148148148, 'recall': 0.9166666666666666, 'f1': 0.8627450980392156, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8823529411764706, 'recall': 0.9836065573770492, 'f1': 0.9302325581395349, 'number': 61} | {'precision': 0.9658119658119658, 'recall': 0.9912280701754386, 'f1': 0.9783549783549784, 'number': 228} | {'precision': 0.9928224125955082, 'recall': 0.9752103707073004, 'f1': 0.9839375860486461, 'number': 4397} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8285024154589372, 'recall': 0.9423076923076923, 'f1': 0.8817480719794345, 'number': 364} | {'precision': 0.3333333333333333, 'recall': 0.2857142857142857, 'f1': 0.30769230769230765, 'number': 7} | 0.9679 | 0.9665 | 0.9672 | 0.9578 |
| 0.0084 | 34.0 | 1836 | 0.2376 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 45} | {'precision': 0.8028169014084507, 'recall': 0.8769230769230769, 'f1': 0.8382352941176471, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8777777777777778, 'recall': 0.8681318681318682, 'f1': 0.8729281767955802, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.5, 'recall': 0.8333333333333334, 'f1': 0.625, 'number': 6} | {'precision': 0.973293768545994, 'recall': 0.9398280802292264, 'f1': 0.956268221574344, 'number': 349} | {'precision': 0.5294117647058824, 'recall': 0.75, 'f1': 0.6206896551724139, 'number': 24} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8571428571428571, 'recall': 0.9836065573770492, 'f1': 0.916030534351145, 'number': 61} | {'precision': 0.9658119658119658, 'recall': 0.9912280701754386, 'f1': 0.9783549783549784, 'number': 228} | {'precision': 0.9946672849524693, 'recall': 0.9756652262906527, 'f1': 0.9850746268656716, 'number': 4397} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8232558139534883, 'recall': 0.9725274725274725, 'f1': 0.8916876574307305, 'number': 364} | {'precision': 0.2, 'recall': 0.14285714285714285, 'f1': 0.16666666666666666, 'number': 7} | 0.9692 | 0.9698 | 0.9695 | 0.9605 |
| 0.0076 | 35.0 | 1890 | 0.2401 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.782608695652174, 'recall': 0.8, 'f1': 0.7912087912087912, 'number': 45} | {'precision': 0.7887323943661971, 'recall': 0.8615384615384616, 'f1': 0.8235294117647058, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8695652173913043, 'recall': 0.8791208791208791, 'f1': 0.8743169398907105, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.4444444444444444, 'recall': 0.6666666666666666, 'f1': 0.5333333333333333, 'number': 6} | {'precision': 0.9735294117647059, 'recall': 0.9484240687679083, 'f1': 0.9608127721335269, 'number': 349} | {'precision': 0.5675675675675675, 'recall': 0.875, 'f1': 0.6885245901639344, 'number': 24} | {'precision': 0.88, 'recall': 0.9166666666666666, 'f1': 0.8979591836734694, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8695652173913043, 'recall': 0.9836065573770492, 'f1': 0.923076923076923, 'number': 61} | {'precision': 0.9658119658119658, 'recall': 0.9912280701754386, 'f1': 0.9783549783549784, 'number': 228} | {'precision': 0.9962825278810409, 'recall': 0.9752103707073004, 'f1': 0.9856338351913574, 'number': 4397} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8412322274881516, 'recall': 0.9752747252747253, 'f1': 0.9033078880407124, 'number': 364} | {'precision': 0.2, 'recall': 0.14285714285714285, 'f1': 0.16666666666666666, 'number': 7} | 0.9716 | 0.9705 | 0.9710 | 0.9617 |
| 0.0071 | 36.0 | 1944 | 0.2417 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7608695652173914, 'recall': 0.7777777777777778, 'f1': 0.7692307692307693, 'number': 45} | {'precision': 0.7534246575342466, 'recall': 0.8461538461538461, 'f1': 0.7971014492753623, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8586956521739131, 'recall': 0.8681318681318682, 'f1': 0.8633879781420766, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.4, 'recall': 0.6666666666666666, 'f1': 0.5, 'number': 6} | {'precision': 0.9707602339181286, 'recall': 0.9512893982808023, 'f1': 0.9609261939218523, 'number': 349} | {'precision': 0.5, 'recall': 0.6666666666666666, 'f1': 0.5714285714285715, 'number': 24} | {'precision': 0.88, 'recall': 0.9166666666666666, 'f1': 0.8979591836734694, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8823529411764706, 'recall': 0.9836065573770492, 'f1': 0.9302325581395349, 'number': 61} | {'precision': 0.9576271186440678, 'recall': 0.9912280701754386, 'f1': 0.9741379310344828, 'number': 228} | {'precision': 0.9953574744661096, 'recall': 0.9752103707073004, 'f1': 0.9851809304997128, 'number': 4397} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8321678321678322, 'recall': 0.9807692307692307, 'f1': 0.9003783102143758, 'number': 364} | {'precision': 0.3333333333333333, 'recall': 0.2857142857142857, 'f1': 0.30769230769230765, 'number': 7} | 0.9689 | 0.9698 | 0.9694 | 0.9601 |
| 0.0067 | 37.0 | 1998 | 0.2478 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 45} | {'precision': 0.7777777777777778, 'recall': 0.8615384615384616, 'f1': 0.8175182481751826, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8666666666666667, 'recall': 0.8571428571428571, 'f1': 0.861878453038674, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.5555555555555556, 'recall': 0.8333333333333334, 'f1': 0.6666666666666667, 'number': 6} | {'precision': 0.9733727810650887, 'recall': 0.9426934097421203, 'f1': 0.9577874818049491, 'number': 349} | {'precision': 0.5428571428571428, 'recall': 0.7916666666666666, 'f1': 0.6440677966101694, 'number': 24} | {'precision': 0.88, 'recall': 0.9166666666666666, 'f1': 0.8979591836734694, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8571428571428571, 'recall': 0.9836065573770492, 'f1': 0.916030534351145, 'number': 61} | {'precision': 0.9658119658119658, 'recall': 0.9912280701754386, 'f1': 0.9783549783549784, 'number': 228} | {'precision': 0.9960529370791734, 'recall': 0.9756652262906527, 'f1': 0.9857536764705882, 'number': 4397} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8380281690140845, 'recall': 0.9807692307692307, 'f1': 0.9037974683544303, 'number': 364} | {'precision': 0.3333333333333333, 'recall': 0.2857142857142857, 'f1': 0.30769230769230765, 'number': 7} | 0.9709 | 0.9703 | 0.9706 | 0.9613 |
| 0.0063 | 38.0 | 2052 | 0.2498 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 45} | {'precision': 0.7777777777777778, 'recall': 0.8615384615384616, 'f1': 0.8175182481751826, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8666666666666667, 'recall': 0.8571428571428571, 'f1': 0.861878453038674, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.625, 'recall': 0.8333333333333334, 'f1': 0.7142857142857143, 'number': 6} | {'precision': 0.9733727810650887, 'recall': 0.9426934097421203, 'f1': 0.9577874818049491, 'number': 349} | {'precision': 0.4838709677419355, 'recall': 0.625, 'f1': 0.5454545454545454, 'number': 24} | {'precision': 0.88, 'recall': 0.9166666666666666, 'f1': 0.8979591836734694, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8571428571428571, 'recall': 0.9836065573770492, 'f1': 0.916030534351145, 'number': 61} | {'precision': 0.9658119658119658, 'recall': 0.9912280701754386, 'f1': 0.9783549783549784, 'number': 228} | {'precision': 0.9960529370791734, 'recall': 0.9756652262906527, 'f1': 0.9857536764705882, 'number': 4397} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8352668213457076, 'recall': 0.989010989010989, 'f1': 0.9056603773584905, 'number': 364} | {'precision': 0.2, 'recall': 0.14285714285714285, 'f1': 0.16666666666666666, 'number': 7} | 0.9708 | 0.9700 | 0.9704 | 0.9607 |
| 0.0063 | 39.0 | 2106 | 0.2491 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7608695652173914, 'recall': 0.7777777777777778, 'f1': 0.7692307692307693, 'number': 45} | {'precision': 0.7887323943661971, 'recall': 0.8615384615384616, 'f1': 0.8235294117647058, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8666666666666667, 'recall': 0.8571428571428571, 'f1': 0.861878453038674, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.4, 'recall': 0.6666666666666666, 'f1': 0.5, 'number': 6} | {'precision': 0.9733727810650887, 'recall': 0.9426934097421203, 'f1': 0.9577874818049491, 'number': 349} | {'precision': 0.4838709677419355, 'recall': 0.625, 'f1': 0.5454545454545454, 'number': 24} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8571428571428571, 'recall': 0.9836065573770492, 'f1': 0.916030534351145, 'number': 61} | {'precision': 0.9658119658119658, 'recall': 0.9912280701754386, 'f1': 0.9783549783549784, 'number': 228} | {'precision': 0.9951321279554938, 'recall': 0.9763475096656812, 'f1': 0.9856503271725405, 'number': 4397} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8412322274881516, 'recall': 0.9752747252747253, 'f1': 0.9033078880407124, 'number': 364} | {'precision': 0.3333333333333333, 'recall': 0.2857142857142857, 'f1': 0.30769230769230765, 'number': 7} | 0.9701 | 0.9697 | 0.9699 | 0.9603 |
| 0.0064 | 40.0 | 2160 | 0.2507 | {'precision': 0.9523809523809523, 'recall': 0.9090909090909091, 'f1': 0.9302325581395349, 'number': 22} | {'precision': 0.7608695652173914, 'recall': 0.7777777777777778, 'f1': 0.7692307692307693, 'number': 45} | {'precision': 0.7857142857142857, 'recall': 0.8461538461538461, 'f1': 0.8148148148148148, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | {'precision': 0.8666666666666667, 'recall': 0.8571428571428571, 'f1': 0.861878453038674, 'number': 91} | {'precision': 0.96, 'recall': 1.0, 'f1': 0.9795918367346939, 'number': 24} | {'precision': 0.4, 'recall': 0.6666666666666666, 'f1': 0.5, 'number': 6} | {'precision': 0.9733727810650887, 'recall': 0.9426934097421203, 'f1': 0.9577874818049491, 'number': 349} | {'precision': 0.4838709677419355, 'recall': 0.625, 'f1': 0.5454545454545454, 'number': 24} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 24} | {'precision': 1.0, 'recall': 0.9963636363636363, 'f1': 0.9981785063752276, 'number': 550} | {'precision': 0.8571428571428571, 'recall': 0.9836065573770492, 'f1': 0.916030534351145, 'number': 61} | {'precision': 0.9698275862068966, 'recall': 0.9868421052631579, 'f1': 0.9782608695652174, 'number': 228} | {'precision': 0.9955926699141731, 'recall': 0.976120081874005, 'f1': 0.9857602204869087, 'number': 4397} | {'precision': 0.5, 'recall': 1.0, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 4} | {'precision': 0.8416075650118203, 'recall': 0.978021978021978, 'f1': 0.9047013977128335, 'number': 364} | {'precision': 0.2, 'recall': 0.14285714285714285, 'f1': 0.16666666666666666, 'number': 7} | 0.9706 | 0.9692 | 0.9699 | 0.9603 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
dccuchile/albert-base-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | 2023-01-17T16:46:57Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1919.28 +/- 410.68
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are assumptions — substitute this model's actual upload):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and restore the agent.
checkpoint = load_from_hub("<user>/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
dccuchile/albert-base-spanish-finetuned-pos | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | # auto-sd-paint-ext
Formerly known as `auto-sd-krita`.
> Extension for AUTOMATIC1111's webUI with Krita Plugin (other drawing studios soon?)
Outdated demo | New UI (TODO: demo image)
--- | ---
 | 
Why use this?
- Optimized workflow (txt2img, img2img, inpaint, outpaint, upscale) & UI design.
- Only drawing studio plugin that exposes the Script API.
- Easily create/save profiles (prompts, samplers, model, etc used).
- Some of the above isn't actually implemented yet.
## Quick Jump
- Full Installation & Workflow Tutorial Video! (Coming Soon...)
- [Installation Guide](https://github.com/Interpause/auto-sd-paint-ext/wiki/Install-Guide)
- [Usage Guide](https://github.com/Interpause/auto-sd-paint-ext/wiki/Usage-Guide)
- [Step by Step Guide to Better Inpainting](https://github.com/Interpause/auto-sd-paint-ext/wiki/Usage-Guide#inpainting-step-by-step)
- [Update Guide](https://github.com/Interpause/auto-sd-paint-ext/wiki/Update-Guide)
- [Features](https://github.com/Interpause/auto-sd-paint-ext/wiki/Features)
- [TODO](https://github.com/Interpause/auto-sd-paint-ext/wiki/TODO)
- [Contribution Guide](https://github.com/Interpause/auto-sd-paint-ext/wiki/Contribution-Guide)
(Outdated) Usage & Workflow Demo:
[](https://youtu.be/nP8MuRwcDN8 "Inpaint like a pro with Stable Diffusion! auto-sd-krita workflow guide")
### Differences from Video
- All webUI scripts have been tested to work!
- SD Upscale, Outpainting Mk 2, Img2Img Alt, etc
- Inpainting experience is better
- Inpaint mask is auto-hidden
- Better mask blur & inpaint full resolution technique than webUI
- UI no longer freezes during image update
- UI has been improved, takes up less space
- Error messages have been improved
## Breaking Changes
- The URL is different now, so reset "Backend URL" to default under the Config tab.
- It is now an AUTOMATIC1111 extension.
- Do <https://github.com/Interpause/auto-sd-krita/wiki/Quick-Switch-Using-Existing-AUTOMATIC1111-Install> in reverse for a quick fix.
- `krita_config.yaml` was renamed to `auto-sd-paint-ext-backend.yaml`.
## FAQ
Q: How does the base_size, max_size system work?
A:
It is an alternative to AUTO's highres fix that works for all modes, not just txt2img.
The selection will be resized such that the shorter dimension is base_size. However, if the aforementioned resize causes the longer dimension to exceed max_size, the shorter dimension will be resized to less than base_size. Setting base_size and max_size higher can be used to generate higher resolution images (along with their issues), essentially **disabling the system**, _though it might make sense for img2img mode_.
This is actually smarter than the builtin highres fix + firstphase width/height system. Thank the original plugin writer, @sddebz, for writing this.
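A minimal sketch of that resize rule (the function and parameter names here are mine for illustration, not the plugin's exact internals):

```python
def resize_for_generation(w: int, h: int, base_size: int = 512, max_size: int = 768):
    """Scale so the shorter side hits base_size, clamping the longer side to max_size."""
    scale = base_size / min(w, h)
    if scale * max(w, h) > max_size:
        scale = max_size / max(w, h)  # shorter side then lands below base_size
    return round(w * scale), round(h * scale)

print(resize_for_generation(1024, 512))  # -> (768, 384): clamped by max_size
```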
<hr/>
Q: Outpainting tab?
A:
While the outpainting tab is still WIP, the outpainting scripts (under img2img tab) works perfectly fine! Alternatively, if you want more control over outpainting, you can:
1. Expand the canvas
2. Scribble in the newly added blank area
3. img2img on the blank area + some of the image
<hr/>
Q: Is the model loaded into memory twice?
A: No, it shares the same backend. Both the Krita plugin and webUI can be used concurrently.
<hr/>
Q: How can you commit to updating regularly?
A: It is easy for me.
<hr/>
Q: Will it work with other Krita plugin backends?
A: Unfortunately no, all plugins so far have different APIs. The official API is coming soon though...
## UI Changelog
See [CHANGELOG.md](./CHANGELOG.md) for the full changelog.
### 2022-12-28
- Added "Alt Dock Behaviour" under "SD Plugin Config".
- _Modifies default Krita dock behaviour!_
- Dragging title bar of docker now drags all stacked/tabbed dockers out instead of just one docker.
- Dragging the tab now drags the specific docker out instead of only re-arranging the tab.
- Enables floating stacked/tabbed dockers.
- Enables subdividing dock areas further.
- See: <https://doc.qt.io/qt-6/qmainwindow.html#DockOption-enum>
- All generations are added to group layer per batch with generation info.
- For batches of generations, all but the last image generated is hidden by default.
### 2022-12-20
- **UI Overhaul**: A few miscellaneous changes with some big ones:
- All tabs are now their own dockers to allow more flexibility in arranging.
- "Restore Defaults" will make all dockers re-appear and arrange themselves.
- Progress & number of pending requests now shown.
- All dropdowns now support searching, useful if your model checkpoint list is really long.
### 2022-12-04
- Add Interrupt button.
### 2022-11-15
- Scripts/features that increase the image size (Simple upscaling, SD upscaling, Outpaint Mk 2, etc) will now expand the canvas when image generation is complete **only if** _there is no active selection_.
- If there is a selection, the image will be scaled to fit the selection region.
- Using Ctrl+A to select the entire image is considered an active selection!
### 2022-11-08
- Inpainting is finally 100% fixed! No more weird borders. Blur works properly.
- Inpainting Full Resolution and Mask Blur were deemed obsolete and removed.
- See <https://github.com/Interpause/auto-sd-paint-ext/wiki/Usage-Guide#inpainting> on better ways to do so.
## Credits
- [@sddebz](https://github.com/sddebz) for writing the original backend API and Krita plugin while keeping the Gradio webUI functionality intact.
## License
MIT for the Krita Plugin backend server & frontend plugin. Code has been nearly completely rewritten compared to original plugin by now.
|
dccuchile/albert-base-spanish-finetuned-qa-mlqa | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2023-01-17T16:51:43Z | ---
license: apache-2.0
tags:
- classification
- generated_from_trainer
datasets:
- poem_sentiment
metrics:
- accuracy
model-index:
- name: clasificador-poem-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: poem_sentiment
type: poem_sentiment
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9038461538461539
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-poem-sentiment
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the poem_sentiment dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5088
- Accuracy: 0.9038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
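These settings translate roughly into `TrainingArguments` as follows; this is a reconstruction, and the dataset column name and label count are assumptions about `poem_sentiment`:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("poem_sentiment")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=4)

def tokenize(batch):
    return tokenizer(batch["verse_text"], truncation=True)

args = TrainingArguments(
    output_dir="clasificador-poem-sentiment",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    seed=42,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"].map(tokenize, batched=True),
    eval_dataset=dataset["validation"].map(tokenize, batched=True),
    tokenizer=tokenizer,
)
trainer.train()
```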
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 112 | 0.4324 | 0.8654 |
| No log | 2.0 | 224 | 0.4070 | 0.875 |
| No log | 3.0 | 336 | 0.5088 | 0.9038 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
dccuchile/albert-base-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### test1 Dreambooth model trained by ukeeba with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
dccuchile/albert-large-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | 2023-02-09T06:17:35Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-fromscratch-galician-xlarge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fromscratch-galician-xlarge
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.1588
## Model description
More information needed
## Intended uses & limitations
More information needed
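Assuming this from-scratch BERT-style checkpoint exposes a masked-LM head, a minimal sketch would be the `fill-mask` pipeline (the repo id is a placeholder, and the high eval loss above suggests largely noisy outputs):

```python
from transformers import pipeline

# Placeholder repo id; the mask token is read from the tokenizer rather than guessed.
fill = pipeline("fill-mask", model="<this-repo-id>")
print(fill(f"O tempo en Galicia é {fill.tokenizer.mask_token}."))
```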
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000543633268612697
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.2129 | 0.34 | 1500 | 7.1660 |
| 7.1653 | 0.68 | 3000 | 7.1588 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.11.0
|
dccuchile/albert-large-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2023-01-17T17:02:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9151612903225806
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7773
- Accuracy: 0.9152
## Model description
More information needed
## Intended uses & limitations
More information needed
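In lieu of documented usage, a minimal intent-classification sketch (the repo id is a placeholder for wherever this fine-tuned checkpoint is hosted):

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual fine-tuned model repo.
intent = pipeline("text-classification", model="<this-repo-id>")
print(intent("how would i get to the airport by 5 pm"))
```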
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.293 | 1.0 | 318 | 3.2831 | 0.7432 |
| 2.6252 | 2.0 | 636 | 1.8743 | 0.8306 |
| 1.5406 | 3.0 | 954 | 1.1576 | 0.8939 |
| 1.0105 | 4.0 | 1272 | 0.8626 | 0.9094 |
| 0.7962 | 5.0 | 1590 | 0.7773 | 0.9152 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dccuchile/albert-large-spanish-finetuned-qa-mlqa | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2023-01-17T17:15:23Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -4.24 +/- 1.03
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are assumptions — substitute this model's actual upload):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and restore the agent.
checkpoint = load_from_hub("<user>/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
dccuchile/albert-large-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: cc-by-nc-sa-4.0
language:
- en
library_name: transformers
tags:
- finance
metrics:
- accuracy
---
## Model
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) trained on [Financial Documents Clustering Kaggle Dataset](https://www.kaggle.com/datasets/drcrabkg/financial-statements-clustering).
It classifies document images into one of the following five classes:
- Income Statements
- Balance Sheets
- Cash Flows
- Notes
- Others
## Training
This model uses OCR data from [EasyOCR](https://github.com/JaidedAI/EasyOCR) instead of the default Tesseract OCR engine.
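A sketch of how EasyOCR output can be fed to the processor at inference time; the file name is an assumption, and the base checkpoint id stands in for this repo's id:

```python
import easyocr
from PIL import Image
from transformers import LayoutLMv3ForSequenceClassification, LayoutLMv3Processor

# apply_ocr=False stops the processor from calling Tesseract itself.
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = LayoutLMv3ForSequenceClassification.from_pretrained("microsoft/layoutlmv3-base", num_labels=5)

image = Image.open("statement.png").convert("RGB")
w, h = image.size
results = easyocr.Reader(["en"]).readtext("statement.png")  # [(box, text, conf), ...]

words, boxes = [], []
for box, text, _conf in results:
    xs, ys = [p[0] for p in box], [p[1] for p in box]
    words.append(text)
    # LayoutLMv3 expects [x0, y0, x1, y1] normalized to a 0-1000 grid.
    boxes.append([int(1000 * min(xs) / w), int(1000 * min(ys) / h),
                  int(1000 * max(xs) / w), int(1000 * max(ys) / h)])

encoding = processor(image, words, boxes=boxes, truncation=True, return_tensors="pt")
pred = model(**encoding).logits.argmax(-1).item()
```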
## Libraries
- transformers 4.25.1
- pytorch-lightning 1.8.6
- torchmetrics 0.11.0
- easyocr 1.6.2 |
dccuchile/albert-tiny-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | 2023-01-17T17:25:20Z |
## Anything v4.5 Saturation Insert


## Anything v4.5

|
dccuchile/albert-tiny-spanish-finetuned-pos | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: mit
---
# Tokenizer for masked language modeling of DNA sequences
```json
"vocab": {
"[PAD]": 0,
"[MASK]": 1,
"[UNK]": 2,
"a": 3,
"c": 4,
"g": 5,
"t": 6
},
```
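A minimal stand-in for how this vocabulary encodes sequences (the helper below is illustrative, not shipped with the repo):

```python
vocab = {"[PAD]": 0, "[MASK]": 1, "[UNK]": 2, "a": 3, "c": 4, "g": 5, "t": 6}

def encode(seq: str) -> list[int]:
    """One id per nucleotide; anything outside a/c/g/t maps to [UNK]."""
    return [vocab.get(ch, vocab["[UNK]"]) for ch in seq.lower()]

print(encode("ACGTN"))  # [3, 4, 5, 6, 2]
```
 |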
dccuchile/albert-tiny-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | 2023-01-17T17:30:10Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: dmarcos/ppo-SnowballTarget1
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
dccuchile/albert-xlarge-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2023-01-17T17:34:18Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- AdamOswald1/autotrain-data-let
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.017109641157049823
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2932785109
- CO2 Emissions (in grams): 0.0171
## Validation Metrics
- Loss: 1.241
- Accuracy: 0.372
- Macro F1: 0.228
- Micro F1: 0.372
- Weighted F1: 0.344
- Macro Precision: 0.190
- Micro Precision: 0.372
- Weighted Precision: 0.337
- Macro Recall: 0.355
- Micro Recall: 0.372
- Weighted Recall: 0.372
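
## Usage

A quick inference sketch (the hub id is a placeholder for this model's actual repo):

```python
from transformers import pipeline

# Placeholder repo id; substitute the AutoTrain model repo.
classifier = pipeline("image-classification", model="<this-repo-id>")
print(classifier("example.jpg"))
```
 |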
dccuchile/albert-xlarge-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | null | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- AdamOswald1/autotrain-data-let
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 3.216116887212137
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2932785111
- CO2 Emissions (in grams): 3.2161
## Validation Metrics
- Loss: 1.165
- Accuracy: 0.376
- Macro F1: 0.269
- Micro F1: 0.376
- Weighted F1: 0.349
- Macro Precision: 0.235
- Micro Precision: 0.376
- Weighted Precision: 0.354
- Macro Recall: 0.413
- Micro Recall: 0.376
- Weighted Recall: 0.376 |
dccuchile/albert-xlarge-spanish-finetuned-qa-mlqa | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2023-01-17T17:35:56Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 46.00 +/- 34.71
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dccuchile/albert-xxlarge-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
license: other
---
This model was created by mixing the dreamlike-art/dreamlike-diffusion-1.0 model with runwayML/stable-diffusion-v1-5-inpainting.
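Since the merge targets inpainting, a usage sketch with diffusers (the repo id is a placeholder and the file names are assumptions):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Placeholder repo id; point this at the merged checkpoint.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "<this-repo-id>", torch_dtype=torch.float16
).to("cuda")

image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = repaint

out = pipe(prompt="dreamlikeart, a castle above the clouds",
           image=image, mask_image=mask).images[0]
out.save("inpainted.png")
```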
Please see the original model card here: https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0 |
dccuchile/albert-xxlarge-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
dccuchile/albert-xxlarge-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | 2023-01-17T17:37:19Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
dccuchile/albert-xxlarge-spanish-finetuned-pos | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
dccuchile/albert-xxlarge-spanish-finetuned-qa-mlqa | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2023-01-17T17:37:21Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
dccuchile/albert-xxlarge-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68 | null | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
dccuchile/albert-base-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
]
| null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 586 | 2023-01-17T17:37:23Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
dccuchile/albert-large-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
]
| null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 75 | null | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
dccuchile/albert-tiny-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
]
| null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 393 | 2023-01-17T17:37:25Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
dccuchile/albert-xlarge-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
]
| null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 91 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### mattyTrained Dreambooth model trained by ymatty with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook
Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)!
To get started, head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars).
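A hedged loading sketch with `diffusers` (the repo id is inferred from the model name above and may differ; verify it on the Hub before use):

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed repo id based on this card; verify on the Hub before use.
pipe = StableDiffusionPipeline.from_pretrained(
    "ymatty/mattyTrained", torch_dtype=torch.float16
).to("cuda")

# DreamBooth models respond to their trained concept token.
image = pipe("a portrait of mattyTrained person, studio lighting").images[0]
image.save("avatar.png")
```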
Sample pictures of this concept:
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-pawsx | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 254.50 +/- 96.03
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jondister -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jondister -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jondister
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.05),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 2e-05),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'MlpPolicy'),
('target_update_interval', 1000),
('train_freq', 10),
('normalize', False)])
```
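## Loading the model with SB3
Beyond the RL Zoo CLI, the checkpoint downloaded into `logs/` can be loaded directly with stable-baselines3 — a sketch; the zip path below assumes the Zoo's usual layout and may need adjusting:
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack
from stable_baselines3.common.evaluation import evaluate_policy

# Frame-stacked Atari env matching the training wrappers above.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

# Path assumed from the rl_zoo3.load_from_hub layout; adjust if needed.
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```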
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-qa-mlqa | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2023-01-17T17:57:38Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 543.00 +/- 188.62
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bguisard -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bguisard -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga bguisard
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0002),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
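## Downloading the checkpoint programmatically
Alternatively, the checkpoint can be fetched with `huggingface_hub` and loaded without the Zoo — a sketch; the repo id and filename are assumptions based on the RL Zoo's usual naming:
```python
from huggingface_hub import hf_hub_download
from stable_baselines3 import DQN

# Repo id and filename assumed from RL Zoo conventions; verify on the Hub.
checkpoint = hf_hub_download(
    repo_id="bguisard/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
print(model.policy)  # CnnPolicy, matching the hyperparameters above
```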
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-xnli | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
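### Push your trained agent to the Hub
Once training finishes, the agent can be published back to the Hub — a sketch using the `mlagents-push-to-hf` command from the course's ML-Agents fork; the run id, local path, and repo id here are placeholders:
```
mlagents-push-to-hf --run-id="SnowballTarget1" \
  --local-dir="./results/SnowballTarget1" \
  --repo-id="your-username/ppo-SnowballTarget" \
  --commit-message="First SnowballTarget agent"
```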
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: dmarcos/ppo-SnowballTarget2
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|