Dataset schema (the records below follow this column order):

| column | dtype | range / length | nullable |
|---|---|---|---|
| repo_id | string | 4-122 chars | no |
| author | string | 2-38 chars | yes |
| model_type | string | 2-33 chars | yes |
| files_per_repo | int64 | 2-39k | no |
| downloads_30d | int64 | 0-33.7M | no |
| library | string | 2-37 chars | yes |
| likes | int64 | 0-4.87k | no |
| pipeline | string | 5-30 chars | yes |
| pytorch | bool | 2 classes | no |
| tensorflow | bool | 2 classes | no |
| jax | bool | 2 classes | no |
| license | string | 2-33 chars | yes |
| languages | string | 2-1.63k chars | yes |
| datasets | string | 2-2.58k chars | yes |
| co2 | string | 6-258 chars | yes |
| prs_count | int64 | 0-125 | no |
| prs_open | int64 | 0-120 | no |
| prs_merged | int64 | 0-46 | no |
| prs_closed | int64 | 0-34 | no |
| discussions_count | int64 | 0-218 | no |
| discussions_open | int64 | 0-148 | no |
| discussions_closed | int64 | 0-70 | no |
| tags | string | 2-513 chars | no |
| has_model_index | bool | 2 classes | no |
| has_metadata | bool | 2 classes | no |
| has_text | bool | 1 class | no |
| text_length | int64 | 201-598k | no |
| readme | string | 0-598k chars | no |
NihiLicA/q-FrozenLake-v1-4x4-noSlippery
|
NihiLicA
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 397 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub downloads the pickled model dict from the Hub
model = load_from_hub(repo_id="NihiLicA/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check whether you need to pass additional environment attributes (e.g. is_slippery=False)
env = gym.make(model["env_id"])
```
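Once loaded, a minimal greedy rollout might look like the sketch below; this assumes the classic Gym `reset`/`step` API and that the pickled dict stores the learned table under a `qtable` key.
```python
import numpy as np

state = env.reset()
done = False
while not done:
    action = np.argmax(model["qtable"][state])  # act greedily w.r.t. the learned Q-table
    state, reward, done, info = env.step(action)
env.close()
```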
|
acampillos/q-FrozenLake-v1-4x4-noSlippery
|
acampillos
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 399 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub downloads the pickled model dict from the Hub
model = load_from_hub(repo_id="acampillos/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check whether you need to pass additional environment attributes (e.g. is_slippery=False)
env = gym.make(model["env_id"])
```
|
Salesforce/blip2-opt-2.7b-coco
|
Salesforce
|
blip-2
| 12 | 15 |
transformers
| 0 |
image-to-text
| true | false | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'image-to-text', 'image-captioning', 'visual-question-answering']
| false | true | true | 2,053 |
# BLIP-2, OPT-2.7b, fine-tuned on COCO
BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal of the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Intended uses & limitations
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
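As a minimal sketch of what such an example looks like with the standard `transformers` BLIP-2 classes (captioning an example COCO image; the linked documentation remains the authoritative reference):
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b-coco")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b-coco")

# Load an example COCO image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Unconditional image captioning
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```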
|
SRobbins/ppo-SnowballTarget
|
SRobbins
| null | 20 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 855 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial that walks you through training your first agent with ML-Agents and publishing it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: SRobbins/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Krud/microsoft_xtremedistil-l12-h384-uncased-TriviaQA
|
Krud
|
bert
| 14 | 10 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 958 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [microsoft/xtremedistil-l12-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l12-h384-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
NihiLicA/Taxi-v3
|
NihiLicA
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 362 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub downloads the pickled model dict from the Hub
model = load_from_hub(repo_id="NihiLicA/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check whether you need to pass additional environment attributes (e.g. is_slippery=False)
env = gym.make(model["env_id"])
```
|
franjamonga/translate
|
franjamonga
|
marian
| 9 | 13 |
transformers
| 0 |
translation
| true | false | false |
apache-2.0
|
['es', 'en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,356 |
### spa-eng
* source group: Spanish
* target group: English
* OPUS readme: [spa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md)
* model: transformer
* source language(s): spa
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip)
* test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt)
* test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.eval.txt)
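As a quick way to try the checkpoint, the sketch below assumes the standard `transformers` Marian classes (the example sentence is only illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

checkpoint = "franjamonga/translate"  # this repository; the original OPUS weights are linked above
tokenizer = MarianTokenizer.from_pretrained(checkpoint)
model = MarianMTModel.from_pretrained(checkpoint)

batch = tokenizer(["Hola, ¿cómo estás?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```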
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-spaeng.spa.eng | 30.6 | 0.570 |
| news-test2008-spaeng.spa.eng | 27.9 | 0.553 |
| newstest2009-spaeng.spa.eng | 30.4 | 0.572 |
| newstest2010-spaeng.spa.eng | 36.1 | 0.614 |
| newstest2011-spaeng.spa.eng | 34.2 | 0.599 |
| newstest2012-spaeng.spa.eng | 37.9 | 0.624 |
| newstest2013-spaeng.spa.eng | 35.3 | 0.609 |
| Tatoeba-test.spa.eng | 59.6 | 0.739 |
### System Info:
- hf_name: spa-eng
- source_languages: spa
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'en']
- src_constituents: {'spa'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt
- src_alpha3: spa
- tgt_alpha3: eng
- short_pair: es-en
- chrF2_score: 0.7390000000000001
- bleu: 59.6
- brevity_penalty: 0.9740000000000001
- ref_len: 79376.0
- src_name: Spanish
- tgt_name: English
- train_date: 2020-08-18 00:00:00
- src_alpha2: es
- tgt_alpha2: en
- prefer_old: False
- long_pair: spa-eng
- helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82
- transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9
- port_machine: brutasse
- port_time: 2020-08-24-18:20
|
acampillos/q-Taxi-v3
|
acampillos
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 366 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub downloads the pickled model dict from the Hub
model = load_from_hub(repo_id="acampillos/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check whether you need to pass additional environment attributes (e.g. is_slippery=False)
env = gym.make(model["env_id"])
```
|
virto/mt5-small-finetuned-rabbi-kook
|
virto
|
mt5
| 11 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,077 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-rabbi-kook
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 223 | 6.4428 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.11.0
|
jannikskytt/Reinforce-CartPole-v1
|
jannikskytt
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
HuggingFaceH4/bloomz-7b1
|
HuggingFaceH4
|
bloom
| 8 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
bigscience-bloom-rail-1.0
|
['ak', 'ar', 'as', 'bm', 'bn', 'ca', 'code', 'en', 'es', 'eu', 'fon', 'fr', 'gu', 'hi', 'id', 'ig', 'ki', 'kn', 'lg', 'ln', 'ml', 'mr', 'ne', 'nso', 'ny', 'or', 'pa', 'pt', 'rn', 'rw', 'sn', 'st', 'sw', 'ta', 'te', 'tn', 'ts', 'tum', 'tw', 'ur', 'vi', 'wo', 'xh', 'yo', 'zh', 'zu']
|
['bigscience/xP3']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| true | true | true | 9,473 |

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages.
- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
- **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**
<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>
# Use
## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
**Feel free to share your generations in the Community tab!**
## How to use
### CPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-7b1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-7b1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-7b1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
<!-- Necessary for whitespace -->
###
# Limitations
**Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Furthermore, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
# Training
## Model
- **Architecture:** Same as [bloom-7b1](https://huggingface.co/bigscience/bloom-7b1), also refer to the `config.json` file
- **Finetuning steps:** 1000
- **Finetuning tokens:** 4.19 billion
- **Finetuning layout:** 1x pipeline parallel, 1x tensor parallel, 64x data parallel
- **Precision:** float16
## Hardware
- **CPUs:** AMD CPUs with 512GB memory per node
- **GPUs:** 64 A100 80GB GPUs with 8 GPUs per node (8 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links
- **Communication:** NCCL-communications network with a fully dedicated subnet
## Software
- **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed)
- **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# Evaluation
We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
# Citation
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
sanali209/imclasif-genres-v001
|
sanali209
|
vit
| 6 | 106 |
transformers
| 0 |
image-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'pytorch', 'huggingpics']
| false | true | true | 332 |
# imclasif-genres-v001
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
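A minimal inference sketch, assuming the generic `transformers` image-classification pipeline works with this checkpoint (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sanali209/imclasif-genres-v001")
print(classifier("path/to/your_image.jpg"))  # replace with your own image file or URL
```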
|
Salesforce/blip2-opt-6.7b-coco
|
Salesforce
|
blip-2
| 14 | 11 |
transformers
| 0 |
image-to-text
| true | false | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'image-to-text', 'image-captioning', 'visual-question-answering']
| false | true | true | 2,053 |
# BLIP-2, OPT-6.7b, fine-tuned on COCO
BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal of the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Intended uses & limitations
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
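A minimal visual-question-answering sketch with the standard `transformers` BLIP-2 classes (the question and image are only illustrative; the linked documentation remains the authoritative reference):
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-6.7b-coco")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-6.7b-coco")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

# Visual question answering: condition generation on a question prompt
prompt = "Question: how many cats are there? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=10)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```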
|
lyk0013/distilbert-finetuned-imdb
|
lyk0013
|
distilbert
| 10 | 0 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,160 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3611
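For reference, if the reported loss is the mean token-level cross-entropy, this corresponds to a masked-language-modelling perplexity of roughly exp(2.3611) ≈ 10.6.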
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9507 | 1.0 | 13 | 2.5946 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
EdenYav/q-FrozenLake-v1-4x4-noSlippery
|
EdenYav
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 396 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub downloads the pickled model dict from the Hub
model = load_from_hub(repo_id="EdenYav/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check whether you need to pass additional environment attributes (e.g. is_slippery=False)
env = gym.make(model["env_id"])
```
|
apatidar0/bert-finetuned-squad
|
apatidar0
|
bert
| 12 | 9 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 930 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
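A minimal usage sketch with the `transformers` question-answering pipeline (the question/context pair is only illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="apatidar0/bert-finetuned-squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="bert-finetuned-squad is a BERT model fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```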
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
vvn0/Reinforce-Pixelcopter-PLE-v0
|
vvn0
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
RMAV/q-FrozenLake-v1-4x4-noSlippery
|
RMAV
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 393 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub downloads the pickled model dict from the Hub
model = load_from_hub(repo_id="RMAV/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check whether you need to pass additional environment attributes (e.g. is_slippery=False)
env = gym.make(model["env_id"])
```
|
EdenYav/Taxi-v3
|
EdenYav
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 361 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub downloads the pickled model dict from the Hub
model = load_from_hub(repo_id="EdenYav/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check whether you need to pass additional environment attributes (e.g. is_slippery=False)
env = gym.make(model["env_id"])
```
|
Salesforce/blip2-flan-t5-xl-coco
|
Salesforce
|
blip-2
| 11 | 7 |
transformers
| 1 |
image-to-text
| true | false | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'image-to-text', 'image-captioning', 'visual-question-answering']
| false | true | true | 2,029 |
# BLIP-2, Flan T5-xl, fine-tuned on COCO
BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal of the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Intended uses & limitations
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
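A minimal captioning sketch with the standard `transformers` BLIP-2 classes (illustrative only; the linked documentation remains the authoritative reference):
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl-coco")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl-coco")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```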
|
LarsLary/xtremedistil-l12-h384-uncased-SQuAD-trained
|
LarsLary
|
bert
| 15 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 957 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [microsoft/xtremedistil-l12-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l12-h384-uncased) on the SQuAD dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
sanali209/imclasif-quality-v001
|
sanali209
|
vit
| 8 | 83 |
transformers
| 0 |
image-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'pytorch', 'huggingpics']
| false | true | true | 333 |
# imclasif-quality-v001
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
|
RMAV/taxi-driver
|
RMAV
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 362 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub downloads the pickled model dict from the Hub
model = load_from_hub(repo_id="RMAV/taxi-driver", filename="q-learning.pkl")
# Don't forget to check whether you need to pass additional environment attributes (e.g. is_slippery=False)
env = gym.make(model["env_id"])
```
|
sakistriker/DreamLikeSamKuvshinov
|
sakistriker
| null | 22 | 15 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
| false | true | true | 4,289 |
# DreamLikeSamKuvshinov
This is a safetensors Diffusers conversion of the model [DreamLikeSamKuvshinov](https://civitai.com/models/1473/dreamlikesamkuvshinov) created by [mattgroy](https://civitai.com/user/mattgroy).
## Model Description (from CivitAI)
A mixture of [Dreamlike Diffusion 1.0](https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0), [SamDoesArt V3](https://huggingface.co/Sandro-Halpo/SamDoesArt-V3) and [Kuvshinov style](https://civitai.com/models/1231/kuvshinov-style) models.
Created mostly for exploring different character concepts with a focus on drawings, but the mix happened to be pretty good at realistic-ish images, all thanks to wonderful models that it uses.
### Prompt Trigger Words
- dreamlikeart
- samdoesart
- kuvshinov
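No usage snippet is included in the original card, so here is a minimal sketch assuming the standard `diffusers` Stable Diffusion pipeline; the prompt simply combines the trigger words above:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sakistriker/DreamLikeSamKuvshinov", torch_dtype=torch.float16
).to("cuda")

prompt = "dreamlikeart samdoesart kuvshinov portrait of a young lady, illustration, highres"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7).images[0]
image.save("example.png")
```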
## Examples (from CivitAI)
### Example 1

```
Positive:
(samdoesart:1.2) dreamlikeart beautiful young lady, long hair, perfect face, cyberpunk, masterpiece, intense shadows, ambient light, illustration, thick outlines, highres, drawn by Greg Rutkowski, (Yoji Shinkawa:1.1), (kuvshinov:0.9), <kuvshinov>
Negative:
(ugly:1.5), (duplicate:1.3), (morbid:1.2), (mutilated:1.2), (mutation), [out of frame], (extra fingers:1.2), (more than two arms), (more than two legs), (missing arms), (missing legs), (poorly drawn hands:1.3), (poorly drawn face:1.3), (deformed:1.2), blurry, (bad anatomy), (bad proportions), (disfigured:1.3), extra limbs, (malformed limbs), mutated hands, (fused fingers), (makeup)
Steps: 50
CFG Scale: 10
Seed: 1736018256
Sampler: DPM++ 2M Karras
```
### Example 2

```
Positive:
(samdoesart:1.1) (dreamlikeart:1) long shot of a beautiful young lady, short hair, (medieval:1.2), masterpiece, intense shadows, ambient light, illustration, (thick outlines:1.2), cartoon, highres, drawn by Inoue Takehiko, (Yoji Shinkawa:1.1), (kuvshinov:1)
Negative:
(ugly:1.5), (duplicate:1.3), (morbid:1.2), (mutilated:1.2), (mutation), [out of frame], (extra fingers:1.2), (more than two arms), (more than two legs), (missing arms), (missing legs), (poorly drawn hands:1.3), (poorly drawn face:1.3), (deformed:1.2), blurry, (bad anatomy), (bad proportions), (disfigured:1.3), extra limbs, (malformed limbs), mutated hands, (fused fingers), (makeup)
Steps: 50
CFG Scale: 5
Seed: 2598035884
Sampler: DPM++ 2M Karras
```
### Example 3

```
Positive:
(samdoesart:1.1) (dreamlikeart:1) full body portrait of a beautiful young lady, short hair, (medieval:1.2), masterpiece, intense shadows, ambient light, illustration, (thick outlines:1.2), cartoon, highres, drawn by Inoue Takehiko, (Yoji Shinkawa:0.9), Jakub Rozalski, (kuvshinov:1)
Negative:
(ugly:1.5), (duplicate:1.3), (morbid:1.2), (mutilated:1.2), (mutation), [out of frame], (extra fingers:1.2), (more than two arms), (more than two legs), (missing arms), (missing legs), (poorly drawn hands:1.3), (poorly drawn face:1.3), (deformed:1.2), blurry, (bad anatomy), (bad proportions), (disfigured:1.3), extra limbs, (malformed limbs), mutated hands, (fused fingers), (makeup)
Steps: 50
CFG Scale: 5
Seed: 3399343143
Sampler: DPM++ 2M Karras
```
### Example 4

```
Positive:
(samdoesart:1) (dreamlikeart:1) full body portrait of a beautiful young lady, curly hair, (sci-fi:1.2), masterpiece, intense shadows, ambient light, illustration, (thick outlines:1.2), cartoon, highres, drawn by Inoue Takehiko, (Diego Dayer:0.9), Jakub Rozalski, (kuvshinov:1) <kuvshinov>
Negative:
(ugly:1.5), (duplicate:1.3), (morbid:1.2), (mutilated:1.2), (mutation), [out of frame], (extra fingers:1.2), (more than two arms), (more than two legs), (missing arms), (missing legs), (poorly drawn hands:1.3), (poorly drawn face:1.3), (deformed:1.2), blurry, (bad anatomy), (bad proportions), (disfigured:1.3), extra limbs, (malformed limbs), mutated hands, (fused fingers), (makeup)
Steps: 50
CFG Scale: 7
Seed: 4289815823
Sampler: DPM++ 2M Karras
```
## Closing
All credit goes to the original model's author.
|
vvn0/ppo-SnowballTarget
|
vvn0
| null | 20 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 851 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial that walks you through training your first agent with ML-Agents and publishing it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: vvn0/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
EgilKarlsen/ApacheBertBaseCase
|
EgilKarlsen
|
bert
| 9 | 10 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,246 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ApacheBertBaseCase
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2938 | 1.0 | 20881 | 0.2663 |
| 0.2345 | 2.0 | 41762 | 0.2134 |
| 0.2182 | 3.0 | 62643 | 0.2008 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
frangiral/q-FrozenLake-v1-4x4-noSlippery
|
frangiral
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 398 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub downloads the pickled model dict from the Hub
model = load_from_hub(repo_id="frangiral/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check whether you need to pass additional environment attributes (e.g. is_slippery=False)
env = gym.make(model["env_id"])
```
|
sakistriker/XperoEnd1essModel
|
sakistriker
| null | 21 | 37 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
| false | true | true | 4,595 |
# Xpero End1ess Model
This is a safetensors Diffusers conversion of the model [Xpero End1ess Model](https://civitai.com/models/6231/xpero-end1ess-model) created by [xpero](https://civitai.com/user/xpero).
## Model Description (from CivitAI)
This model is a custom blend of various models, presenting many options for generating images, including NSFW. Based on Stable Diffusion 1.5. Primarily focused on the creation of digital art characters. Can easily generate great images in different styles - characters, illustration, anime etc.
The model is sensitive to the choice of VAE; I use [vae-ft-mse-840000-ema-pruned](https://huggingface.co/stabilityai/sd-vae-ft-mse).
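A minimal sketch of loading the model together with the recommended VAE, assuming the standard `diffusers` API:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "sakistriker/XperoEnd1essModel", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("portrait of a rock girl, digital art, highres", num_inference_steps=24, guidance_scale=7.5).images[0]
image.save("example.png")
```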
## Examples (from CivitAI)
### Example 1

```
Positive:
a woman in a white top and pink shorts is standing next to a bike with a yellow background and, by Quentin Tarantino, 1girl, bracelet, brown_hair, jewelry, letterboxed, lips, long_hair, makeup, medium_breasts, midriff, nail_polish, navel, necklace, nose, orange_sky, pink_shorts, realistic, shorts, solo, sun, sunset, tattoo,wristband, yellow_background, yellow_sky, beautiful detailed glow, detailed, Cinematic light, intricate detail, highres, detailed facial features, high detail, sharp focus, smooth, aesthetic, extremely detailed, stamp, octane render,
Negative:
multiple people, lowres, bad anatomy, bad hands, text, error, missing fingers,extra digit, fewer digits, cropped, worstquality, low quality, normal quality,jpegartifacts,signature, watermark, username,blurry,bad feet,cropped,poorly drawn hands,poorly drawn face,mutation,deformed,worst quality,low quality,normal quality,jpeg artifacts,signature,watermark,extra fingers,fewer digits,extra limbs,extra arms,extra legs,malformed limbs,fused fingers,too many fingers,long neck,cross-eyed,mutated hands,polar lowres,bad body,bad proportions,gross proportions,text,error,missing fingers,missing arms,missing legs,extra digit
Steps: 22
CFG Scale: 8
Seed: 1116627766
Sampler: Euler-a
```
### Example 2

```
Positive:
a woman with blue disheveled hair and piercings, with a dark and a black background, Charlie Bowater, stanley artgerm lau, a character portrait, sots art, sharp focus, smooth, aesthetic, extremely detailed, octane render, 1girl, black_choker, blue_eyes, blurry, choker, earrings, jewelry, lips, nose, piercing, realistic, short_hair, solo, dark industrial background, rtx, upper body, dark makeup, elegant pose, rock clothes, tattoo on arm, leather shirt, navel, beautiful detailed glow, detailed, Cinematic light, intricate detail, highres, detailed facial features, high detail
Negative:
multiple people, lowres, bad anatomy, bad hands, text, error, missing fingers,extra digit, fewer digits, cropped, worstquality, low quality, normal quality,jpegartifacts,signature, watermark, username,blurry,bad feet,cropped,poorly drawn hands,poorly drawn face,mutation,deformed,worst quality,low quality,normal quality,jpeg artifacts,signature,watermark,extra fingers,fewer digits,extra limbs,extra arms,extra legs,malformed limbs,fused fingers,too many fingers,long neck,cross-eyed,mutated hands,polar lowres,bad body,bad proportions,gross proportions,text,error,missing fingers,missing arms,missing legs,extra digit
Steps: 22
CFG Scale: 8
Seed: 4281920120
Sampler: Euler-a
```
### Example 3

```
Positive:
1/2 portrait of beautiful rock girl, punk, slim body, beautiful detailed glow, highres, high detail, smooth, aesthetic, extremely detailed, octane render, detailed facial features, sharp focus, rtx, ambient light, intricate city background, pink hair, expressive eyes
Negative:
multiple people, lowres, bad anatomy, bad hands, text, error, missing fingers,extra digit, fewer digits, cropped, worstquality, low quality, normal quality,jpegartifacts,signature, watermark, username,blurry,bad feet,cropped,poorly drawn hands,poorly drawn face,mutation,deformed,worst quality,low quality,normal quality,jpeg artifacts,signature,watermark,extra fingers,fewer digits,extra limbs,extra arms,extra legs,malformed limbs,fused fingers,too many fingers,long neck,cross-eyed,mutated hands,polar lowres,bad body,bad proportions,gross proportions,text,error,missing fingers,missing arms,missing legs,extra digit
Steps: 24
CFG Scale: 7.5
Seed: 1452262543
Sampler: Euler-a
```
## Closing
All credit goes to the original model's author.
|
Allen-Young/xtremedistil-l6-h256-uncased-nlp4web1
|
Allen-Young
|
bert
| 15 | 52 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,030 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on a [NewsQA](https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/NewsQA.jsonl.gz) dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
adsjklfsd/pegasus-samsum
|
adsjklfsd
|
pegasus
| 22 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false | null | null |
['samsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,009 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
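For reference, these hyperparameters roughly correspond to the following `transformers` `TrainingArguments` (a sketch, not the authors' exact training script; `output_dir` is arbitrary):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="pegasus-samsum",        # arbitrary output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,     # 8 * 16 = 128 effective train batch size
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=1,
    seed=42,
)
```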
### Framework versions
- Transformers 4.26.0
- Pytorch 1.10.1+cu113
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_stsb_256
|
gokuls
|
distilbert
| 17 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,994 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_data_aug_stsb_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4500
- Pearson: 0.1761
- Spearmanr: 0.1778
- Combined Score: 0.1770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 0.5832 | 1.0 | 1259 | 1.5244 | 0.1737 | 0.1803 | 0.1770 |
| 0.2202 | 2.0 | 2518 | 1.4500 | 0.1761 | 0.1778 | 0.1770 |
| 0.1249 | 3.0 | 3777 | 1.4720 | 0.1743 | 0.1782 | 0.1762 |
| 0.0822 | 4.0 | 5036 | 1.5790 | 0.1581 | 0.1658 | 0.1619 |
| 0.0611 | 5.0 | 6295 | 1.4750 | 0.1850 | 0.1905 | 0.1878 |
| 0.0477 | 6.0 | 7554 | 1.5776 | 0.1612 | 0.1694 | 0.1653 |
| 0.0394 | 7.0 | 8813 | 1.5512 | 0.1648 | 0.1694 | 0.1671 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jannikskytt/Reinforce-PixelCopter
|
jannikskytt
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
frangiral/Taxi-v3-Try1
|
frangiral
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 368 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub downloads the pickled model dict from the Hub
model = load_from_hub(repo_id="frangiral/Taxi-v3-Try1", filename="q-learning.pkl")
# Don't forget to check whether you need to pass additional environment attributes (e.g. is_slippery=False)
env = gym.make(model["env_id"])
```
|
EgilKarlsen/ApacheRoberta
|
EgilKarlsen
|
roberta
| 8 | 13 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,235 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ApacheRoberta
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3618 | 1.0 | 18539 | 0.3174 |
| 0.2826 | 2.0 | 37078 | 0.2560 |
| 0.2633 | 3.0 | 55617 | 0.2315 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
EgilKarlsen/ApacheGPT2
|
EgilKarlsen
|
gpt2
| 9 | 10 |
transformers
| 0 |
text-generation
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,240 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ApacheGPT2
This model is a fine-tuned version of [EgilKarlsen/gpt2](https://huggingface.co/EgilKarlsen/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4368 | 1.0 | 18303 | 0.4028 |
| 0.3919 | 2.0 | 36606 | 0.3600 |
| 0.3766 | 3.0 | 54909 | 0.3480 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BenjaminB/test-skops-card-creator-02
|
BenjaminB
| null | 6 | 0 |
sklearn
| 0 |
tabular-classification
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sklearn', 'skops', 'tabular-classification', 'art']
| false | true | true | 5,910 |
# Model description
[More Information Needed]
## Intended uses & limitations
### Only for cats

### And for birbs

## Training Procedure
### Hyperparameters
The model is trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|------------------|---------|
| constant | |
| random_state | |
| strategy | prior |
</details>
### Model Plot
The model is a `DummyClassifier()`.
## Evaluation Results
Details about the evaluation process and the evaluation results are given below.
| Metric | Value |
|----------|---------|
| accuracy | 0.9 |
# How to Get Started with the Model
[More Information Needed]
# Model Card Authors
This model card is written by the following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
|
Kuray107/RATS_clean
|
Kuray107
|
wav2vec2
| 27 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,095 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RATS_clean
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3775
- Wer: 0.1324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
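For reference, a sketch of how these hyperparameters could be expressed as `transformers.TrainingArguments` (the output directory is a placeholder and not part of the original run; the Adam betas/epsilon above are the defaults):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="rats_clean",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
    fp16=True,  # Native AMP
)
```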
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4025 | 0.42 | 1000 | 0.3019 | 0.1434 |
| 0.3939 | 0.85 | 2000 | 0.3044 | 0.1377 |
| 0.3808 | 1.27 | 3000 | 0.3053 | 0.1417 |
| 0.377 | 1.7 | 4000 | 0.3213 | 0.1414 |
| 0.3616 | 2.12 | 5000 | 0.2976 | 0.1449 |
| 0.3519 | 2.55 | 6000 | 0.3266 | 0.1482 |
| 0.3532 | 2.97 | 7000 | 0.3365 | 0.1455 |
| 0.3377 | 3.4 | 8000 | 0.3170 | 0.1409 |
| 0.3321 | 3.82 | 9000 | 0.3070 | 0.1398 |
| 0.3199 | 4.25 | 10000 | 0.3123 | 0.1366 |
| 0.322 | 4.67 | 11000 | 0.3166 | 0.1378 |
| 0.3085 | 5.1 | 12000 | 0.3492 | 0.1448 |
| 0.2954 | 5.52 | 13000 | 0.3173 | 0.1387 |
| 0.3003 | 5.95 | 14000 | 0.3341 | 0.1442 |
| 0.2863 | 6.37 | 15000 | 0.3124 | 0.1382 |
| 0.2887 | 6.8 | 16000 | 0.3273 | 0.1404 |
| 0.2777 | 7.22 | 17000 | 0.3291 | 0.1399 |
| 0.2728 | 7.65 | 18000 | 0.3326 | 0.1352 |
| 0.2726 | 8.07 | 19000 | 0.3443 | 0.1355 |
| 0.2519 | 8.5 | 20000 | 0.3448 | 0.1400 |
| 0.3256 | 8.92 | 21000 | 0.3274 | 0.1396 |
| 0.3174 | 9.35 | 22000 | 0.3210 | 0.1411 |
| 0.3075 | 9.77 | 23000 | 0.3319 | 0.1410 |
| 0.2982 | 10.2 | 24000 | 0.3463 | 0.1424 |
| 0.2927 | 10.62 | 25000 | 0.3445 | 0.1399 |
| 0.3027 | 11.05 | 26000 | 0.3488 | 0.1410 |
| 0.2835 | 11.47 | 27000 | 0.3639 | 0.1371 |
| 0.2767 | 11.89 | 28000 | 0.3467 | 0.1391 |
| 0.2792 | 12.32 | 29000 | 0.3459 | 0.1358 |
| 0.2701 | 12.74 | 30000 | 0.3425 | 0.1351 |
| 0.269 | 13.17 | 31000 | 0.3790 | 0.1421 |
| 0.2567 | 13.59 | 32000 | 0.3613 | 0.1352 |
| 0.2626 | 14.02 | 33000 | 0.3586 | 0.1396 |
| 0.2436 | 14.44 | 34000 | 0.3694 | 0.1369 |
| 0.2557 | 14.87 | 35000 | 0.3715 | 0.1321 |
| 0.2468 | 15.29 | 36000 | 0.3970 | 0.1348 |
| 0.2463 | 15.72 | 37000 | 0.3675 | 0.1304 |
| 0.2362 | 16.14 | 38000 | 0.3690 | 0.1377 |
| 0.2388 | 16.57 | 39000 | 0.3775 | 0.1310 |
| 0.2314 | 16.99 | 40000 | 0.3601 | 0.1326 |
| 0.2315 | 17.42 | 41000 | 0.3633 | 0.1322 |
| 0.2334 | 17.84 | 42000 | 0.3794 | 0.1356 |
| 0.2255 | 18.27 | 43000 | 0.3670 | 0.1316 |
| 0.2222 | 18.69 | 44000 | 0.3778 | 0.1341 |
| 0.225 | 19.12 | 45000 | 0.3708 | 0.1331 |
| 0.2209 | 19.54 | 46000 | 0.3807 | 0.1332 |
| 0.2216 | 19.97 | 47000 | 0.3775 | 0.1324 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Eulalye921/test_trainer
|
Eulalye921
|
bert
| 9 | 8 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['yelp_review_full']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,320 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0183
- Accuracy: 0.586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Mandoryan/DQN-LunarLander-v2
|
Mandoryan
| null | 19 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
erniechiew/a2c-AntBulletEnv-v0
|
erniechiew
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
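Until the snippet above is completed, a minimal loading sketch might look like this (the checkpoint filename is an assumption; check the repository's file list for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is hypothetical; adjust it to the actual .zip stored in the repo.
checkpoint = load_from_hub(repo_id="erniechiew/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```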
|
DL82/remylacroix
|
DL82
| null | 41 | 4 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 2,734 |
### remylacroix Dreambooth model trained by DL82 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
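A minimal local inference sketch with `diffusers` is shown below; the fp16/CUDA setup and the example prompt are assumptions, not part of the original training recipe:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth weights from the Hub (assumes the repo is in diffusers format).
pipe = StableDiffusionPipeline.from_pretrained("DL82/remylacroix", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Use the concept token "remylacroix" in the prompt, as recommended below.
image = pipe("a portrait photo of remylacroix, highly detailed").images[0]
image.save("remylacroix.png")
```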
Sample pictures of:
remylacroix (use that on your prompt)

|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_wnli_256
|
gokuls
|
distilbert
| 17 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,636 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_data_aug_wnli_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5279
- Accuracy: 0.1549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3422 | 1.0 | 218 | 0.5279 | 0.1549 |
| 0.305 | 2.0 | 436 | 0.5961 | 0.1268 |
| 0.291 | 3.0 | 654 | 0.6364 | 0.0845 |
| 0.2816 | 4.0 | 872 | 0.6604 | 0.0986 |
| 0.2744 | 5.0 | 1090 | 0.6627 | 0.0845 |
| 0.2686 | 6.0 | 1308 | 0.6618 | 0.0986 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
deepneuralnet/lll
|
deepneuralnet
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
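Until the snippet above is completed, a minimal loading sketch might look like this (the checkpoint filename is an assumption; check the repository's file list for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is hypothetical; adjust it to the actual .zip stored in the repo.
checkpoint = load_from_hub(repo_id="deepneuralnet/lll", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```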
|
SRobbins/ppo-Pyramids_Training
|
SRobbins
| null | 20 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
| false | true | true | 840 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: SRobbins/ppo-Pyramids_Training
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pfunk/Pong-v4-DQPN_p30_pt0.1-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,990 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p30_pt0.1.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p30_pt0.1]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p30_pt0.1 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_pt0.1-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_pt0.1-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_pt0.1-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p30_pt0.1 --start-policy-f 30000 --end-policy-f 30000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 30000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p30_pt0.1',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 0.1,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 30000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
virto/mt5-base-finetuned-rabbi-kook
|
virto
|
mt5
| 11 | 5 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,201 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-rabbi-kook
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2102 | 1.0 | 3567 | 2.4526 |
| 3.0283 | 2.0 | 7134 | 2.3861 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.11.0
|
SRobbins/q-Taxi-v3
|
SRobbins
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 364 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="SRobbins/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
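Note that `load_from_hub` here is a helper rather than a published library function; a minimal sketch of such a helper, assuming the Q-table is stored as a pickled dictionary, is:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table dictionary from the Hub and unpickle it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```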
|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_mnli_256
|
gokuls
|
distilbert
| 17 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,652 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_data_aug_mnli_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5082
- Accuracy: 0.6312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5216 | 1.0 | 31440 | 0.5047 | 0.6315 |
| 0.4566 | 2.0 | 62880 | 0.5097 | 0.6383 |
| 0.4188 | 3.0 | 94320 | 0.5243 | 0.6361 |
| 0.3943 | 4.0 | 125760 | 0.5328 | 0.6346 |
| 0.3777 | 5.0 | 157200 | 0.5345 | 0.6300 |
| 0.3658 | 6.0 | 188640 | 0.5392 | 0.6318 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pfunk/Pong-v4-DQPN_p50_pt0.1_tt0.1-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,038 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p50_pt0.1_tt0.1.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p50_pt0.1_tt0.1]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p50_pt0.1_tt0.1 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_pt0.1_tt0.1-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_pt0.1_tt0.1-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_pt0.1_tt0.1-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p50_pt0.1_tt0.1 --start-policy-f 50000 --end-policy-f 50000 --evaluation-fraction 1.00 --target-tau 0.1 --policy-tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 50000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p50_pt0.1_tt0.1',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 0.1,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 50000,
'target_network_frequency': 1000,
'target_tau': 0.1,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
qgallouedec/ppo-MiniGrid-DoorKey-5x5-v0
|
qgallouedec
| null | 16 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['MiniGrid-DoorKey-5x5-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,094 |
# **PPO** Agent playing **MiniGrid-DoorKey-5x5-v0**
This is a trained model of a **PPO** agent playing **MiniGrid-DoorKey-5x5-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-DoorKey-5x5-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-DoorKey-5x5-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-DoorKey-5x5-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-DoorKey-5x5-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env MiniGrid-DoorKey-5x5-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env MiniGrid-DoorKey-5x5-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper', 'gym_minigrid.wrappers.FlatObsWrapper'),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.00025),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 128),
('n_timesteps', 100000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
0RisingStar0/HighRiseMixV1
|
0RisingStar0
| null | 11 | 0 |
diffusers
| 12 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
| false | true | true | 1,548 |
<p align="center"><img src="https://huggingface.co/0RisingStar0/HighRiseMixV1/resolve/main/00401-2269441947-(masterpiece%2C%20excellent%20quality%2C%20high%20quality%2C%20highres%20_%201.5)%2C%20(1girl%2C%20solo)%2C%20solo%20focus%2C%20sky%2C%20city%2C%20skyscrapers%2C%20pavement%2C%20tree.png">
<img src="https://huggingface.co/0RisingStar0/HighRiseMixV1/resolve/main/13.png"></p>
<b>V2 is out! : https://huggingface.co/0RisingStar0/HighRiseMixV2</b>
<center><b>HighRiseMixV1</b></center>
U-Net mixed model <b>specialized for city and skyscraper backgrounds.</b>
<b>FP16 Pruned version</b> (No EMA).
(Quality change may occur in very small details on buildings' textures)
<b>Recommended prompts : </b>
(masterpiece, best quality, excellent quality), ((1girl, solo)), sky, city, (skyscrapers), trees, pavement, lens flare
Negative prompt: EasyNegative, moss, phone, man, pedestrians, extras, border, outside border, white border
(EasyNegative is a negative embedding : https://huggingface.co/datasets/gsdf/EasyNegative)
<b>Recommended settings : </b>
Sampler : DPM++ 2M Karras OR DPM++ SDE Karras
Sampling steps : 25 ~ 30
Resolution : 512x768 OR 768x512
CFG Scale : 9
<b> Upscale is a must-do!! </b> Otherwise, you won't get great results.
Upscaler : Latent (nearest)
Hires steps : 0
Denoise : 0.6
Upscale 2x
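For reference, a rough `diffusers` translation of these settings (this assumes the repo ships diffusers-format weights; the EasyNegative embedding and the latent upscale pass would need to be handled separately):
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# fp16/CUDA and the diffusers-format assumption are not from the original card.
pipe = StableDiffusionPipeline.from_pretrained("0RisingStar0/HighRiseMixV1", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)  # DPM++ 2M Karras

image = pipe(
    "(masterpiece, best quality, excellent quality), ((1girl, solo)), sky, city, (skyscrapers), trees, pavement, lens flare",
    negative_prompt="moss, phone, man, pedestrians, extras, border, outside border, white border",
    num_inference_steps=28,
    guidance_scale=9,
    width=512,
    height=768,
).images[0]
image.save("highrise.png")
```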
<b> Mixed models : </b>
AbyssOrangeMix2_NSFW, AnythingV4.5, BasilMixFixed, CounterfeitV2.5, EerieOrangeMix2, PowercolorV2
(Thanks to everyone who made above models!)
This is my first mixed model uploaded to a public site, so feel free to give feedback as you wish; I'll try to work with it.
|
mstaron/SingBERTa
|
mstaron
|
roberta
| 7 | 15 |
transformers
| 0 |
fill-mask
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,606 |
This model is a RoBERTa model trained on programming-language code: the WolfSSL sources plus examples of singletons mixed with Linux kernel code. The model is pre-trained to understand the concept of a singleton in code.
The programming language is C/C++, but the actual inference can also use other languages.
Using the model to unmask tokens can be done in the following way:
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='mstaron/SingBERTa')
unmasker("Hello I'm a <mask> model.")
```
Embeddings for a downstream task can be obtained in the following way:
```python
# import the model via the huggingface library
from transformers import AutoTokenizer, AutoModelForMaskedLM
# load the tokenizer and the model for the pretrained SingBERTa
tokenizer = AutoTokenizer.from_pretrained('mstaron/SingBERTa')
# load the model
model = AutoModelForMaskedLM.from_pretrained("mstaron/SingBERTa")
# import the feature extraction pipeline
from transformers import pipeline
# create the pipeline, which will extract the embedding vectors
# the models are already pre-defined, so we do not need to train anything here
features = pipeline(
"feature-extraction",
model=model,
tokenizer=tokenizer,
return_tensor = False
)
# extract the features == embeddings
lstFeatures = features('Class SingletonX1')
# print the first token's embedding [CLS]
# which is also a good approximation of the whole sentence embedding
# the same as using np.mean(lstFeatures[0], axis=0)
lstFeatures[0][0]
```
In order to use the model for a downstream task, it needs to be fine-tuned on that task; a minimal sketch is shown below.
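The task (binary classification), labels, and example inputs here are assumptions rather than part of the original training setup:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical downstream task: binary classification on top of SingBERTa.
tokenizer = AutoTokenizer.from_pretrained("mstaron/SingBERTa")
model = AutoModelForSequenceClassification.from_pretrained("mstaron/SingBERTa", num_labels=2)

# Forward pass on a toy batch; real use would fine-tune this head on labelled code snippets.
batch = tokenizer(["class SingletonX1 { };", "int add(int a, int b);"], padding=True, return_tensors="pt")
logits = model(**batch).logits
print(logits.shape)  # torch.Size([2, 2])
```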
|
cedwin/log_c_pt
|
cedwin
|
mpnet
| 13 | 27 |
sentence-transformers
| 0 |
sentence-similarity
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
| false | true | true | 3,569 |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2053 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 4106,
"warmup_steps": 411,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
bigcode/santacoder-fast-inference
|
bigcode
|
gpt_bigcode
| 4 | 34 |
transformers
| 0 |
text-generation
| true | false | false |
openrail
|
['code']
|
['bigcode/the-stack']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| true | true | true | 3,394 |
# SantaCoder

Play with the model on the [SantaCoder Space Demo](https://huggingface.co/spaces/bigcode/santacoder-demo).
# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
# Model Summary
This is the Megatron-version of [SantaCoder](https://huggingface.co/bigcode/santacoder).
We refer the reader to the [SantaCoder model page](https://huggingface.co/bigcode/santacoder) for full documentation about this model
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** [🎅SantaCoder: Don't reach for the stars!🌟](https://t.co/YV3pzUbYOr)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Languages:** Python, Java, and JavaScript
There are two versions (branches) of the model:
* `main`: Uses the `gpt_bigcode` model. [Requires the bigcode fork of transformers](https://github.com/bigcode-project/transformers).
* `main_custom`: Packaged with its modeling code. Requires `transformers>=4.27`.
Alternatively, it can run on older versions by setting the configuration parameter `activation_function = "gelu_pytorch_tanh"`.
# Use
## Intended use
The model was trained on GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well.
You should phrase commands like they occur in source code such as comments (e.g. `# the following function computes the sqrt`) or write a function signature and docstring and let the model complete the function body.
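A hedged generation sketch in that style (it uses the `main_custom` branch described above; the exact revision/trust settings are assumptions, not confirmed by this card):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "bigcode/santacoder-fast-inference"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# `main_custom` packages its own modeling code, so trust_remote_code is assumed to be required.
model = AutoModelForCausalLM.from_pretrained(checkpoint, revision="main_custom", trust_remote_code=True)

# Phrase the request as source code rather than as an instruction.
prompt = "# the following function computes the sqrt of a number\ndef my_sqrt(x):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=48)
print(tokenizer.decode(outputs[0]))
```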
### Attribution & Other Requirements
The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or impose other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/santacoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on source code in Python, Java, and JavaScript. The predominant natural language in the source is English, although other languages are also present. As such, the model is capable of generating code snippets provided some context, but the generated code is not guaranteed to work as intended. It can be inefficient and contain bugs or exploits.
# Training
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Pretraining steps:** 600K
- **Pretraining tokens:** 236 billion
- **Precision:** float16
## Hardware
- **GPUs:** 96 Tesla V100
- **Training time:** 6.2 days
- **Total FLOPS:** 2.1 x 10e21
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the CodeML Open RAIL-M v0.1 license. You can find the full license [here](https://huggingface.co/spaces/bigcode/license).
|
lukee/a2c-PandaReachDense-v2
|
lukee
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
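Until the snippet above is completed, a hedged loading sketch might look like this (the checkpoint filename and the gym/panda-gym API details are assumptions):
```python
import gym
import panda_gym  # registers PandaReachDense-v2 with gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is hypothetical; adjust it to the actual .zip stored in the repo.
checkpoint = load_from_hub(repo_id="lukee/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()  # assumes the pre-0.26 gym API used by panda-gym v2
action, _ = model.predict(obs, deterministic=True)
```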
|
PeterBanning71/t5-small-finetuned-xsum
|
PeterBanning71
|
t5
| 14 | 12 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null |
['xsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 1,489 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4226
- Rouge1: 29.0135
- Rouge2: 8.2985
- Rougel: 22.9598
- Rougelsum: 22.954
- Gen Len: 18.8244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6843 | 1.0 | 12753 | 2.4482 | 28.6347 | 7.9907 | 22.5689 | 22.567 | 18.8236 |
| 2.6439 | 2.0 | 25506 | 2.4226 | 29.0135 | 8.2985 | 22.9598 | 22.954 | 18.8244 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
vaibhav9/mini5-a
|
vaibhav9
|
bert
| 14 | 8 |
transformers
| 0 |
question-answering
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,170 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini5-a
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 52 | 1.5947 |
| No log | 2.0 | 104 | 1.5901 |
| No log | 3.0 | 156 | 1.5849 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
lnros/poca-SoccerTwos
|
lnros
| null | 21 | 369 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 839 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: lnros/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
qgallouedec/ppo-CarRacing-v0
|
qgallouedec
| null | 16 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CarRacing-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,589 |
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env CarRacing-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env CarRacing-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper',
[{'rl_zoo3.wrappers.FrameSkip': {'skip': 2}},
{'gym.wrappers.resize_observation.ResizeObservation': {'shape': 64}},
{'gym.wrappers.gray_scale_observation.GrayScaleObservation': {'keep_dim': True}}]),
('frame_stack', 2),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 'lin_1e-4'),
('max_grad_norm', 0.5),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 512),
('n_timesteps', 4000000.0),
('normalize', "{'norm_obs': False, 'norm_reward': True}"),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(log_std_init=-2, ortho_init=False, activation_fn=nn.GELU, '
'net_arch=dict(pi=[256], vf=[256]), )'),
('sde_sample_freq', 4),
('use_sde', True),
('vf_coef', 0.5),
('normalize_kwargs', {'norm_obs': False, 'norm_reward': False})])
```
|
sb3/ppo-CarRacing-v0
|
sb3
| null | 16 | 1 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CarRacing-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,565 |
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env CarRacing-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env CarRacing-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper',
[{'rl_zoo3.wrappers.FrameSkip': {'skip': 2}},
{'gym.wrappers.resize_observation.ResizeObservation': {'shape': 64}},
{'gym.wrappers.gray_scale_observation.GrayScaleObservation': {'keep_dim': True}}]),
('frame_stack', 2),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 'lin_1e-4'),
('max_grad_norm', 0.5),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 512),
('n_timesteps', 4000000.0),
('normalize', "{'norm_obs': False, 'norm_reward': True}"),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(log_std_init=-2, ortho_init=False, activation_fn=nn.GELU, '
'net_arch=dict(pi=[256], vf=[256]), )'),
('sde_sample_freq', 4),
('use_sde', True),
('vf_coef', 0.5),
('normalize_kwargs', {'norm_obs': False, 'norm_reward': False})])
```
|
mutisya/wav2vec2-300m-swa-r22-1K-lambda-fine-v1
|
mutisya
|
wav2vec2
| 28 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false | null | null |
['common_voice_9_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 12,270 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-300m-swa-r22-1K-lambda-fine-v1
This model is a fine-tuned version of [mutisya/wav2vec2-300m-swa-r22-1K-lambda-pre-v1](https://huggingface.co/mutisya/wav2vec2-300m-swa-r22-1K-lambda-pre-v1) on the common_voice_9_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2218
- Wer: 0.2379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.8849 | 0.06 | 400 | 3.1614 | 1.0 |
| 2.2947 | 0.11 | 800 | 1.2490 | 0.9366 |
| 0.8926 | 0.17 | 1200 | 0.7215 | 0.6875 |
| 0.6727 | 0.22 | 1600 | 0.5753 | 0.5788 |
| 0.6056 | 0.28 | 2000 | 0.5337 | 0.5421 |
| 0.5423 | 0.33 | 2400 | 0.4949 | 0.5352 |
| 0.5089 | 0.39 | 2800 | 0.4556 | 0.4857 |
| 0.5041 | 0.44 | 3200 | 0.4575 | 0.4864 |
| 0.4719 | 0.5 | 3600 | 0.4391 | 0.4503 |
| 0.4613 | 0.55 | 4000 | 0.4145 | 0.4609 |
| 0.4468 | 0.61 | 4400 | 0.3956 | 0.4461 |
| 0.4331 | 0.67 | 4800 | 0.5997 | 0.4302 |
| 0.4207 | 0.72 | 5200 | 0.5058 | 0.4434 |
| 0.4015 | 0.78 | 5600 | 0.3794 | 0.4228 |
| 0.4117 | 0.83 | 6000 | 0.3523 | 0.4077 |
| 0.3943 | 0.89 | 6400 | 0.3392 | 0.4030 |
| 0.3894 | 0.94 | 6800 | 0.3377 | 0.3983 |
| 0.3888 | 1.0 | 7200 | 0.3425 | 0.3899 |
| 0.3515 | 1.05 | 7600 | 0.3666 | 0.3872 |
| 0.3437 | 1.11 | 8000 | 0.3806 | 0.3760 |
| 0.3446 | 1.16 | 8400 | 0.3440 | 0.3732 |
| 0.3422 | 1.22 | 8800 | 1.4329 | 0.3821 |
| 0.3511 | 1.27 | 9200 | 0.3203 | 0.3672 |
| 0.3458 | 1.33 | 9600 | 0.3142 | 0.3723 |
| 0.3347 | 1.39 | 10000 | 0.3037 | 0.3636 |
| 0.3459 | 1.44 | 10400 | 0.3210 | 0.3669 |
| 0.3271 | 1.5 | 10800 | 0.3145 | 0.3593 |
| 0.3343 | 1.55 | 11200 | 0.3055 | 0.3531 |
| 0.3293 | 1.61 | 11600 | 0.3111 | 0.3646 |
| 0.3163 | 1.66 | 12000 | 0.3305 | 0.3617 |
| 0.319 | 1.72 | 12400 | 0.3259 | 0.3522 |
| 0.3294 | 1.77 | 12800 | 0.3156 | 0.3513 |
| 0.3155 | 1.83 | 13200 | 0.2889 | 0.3400 |
| 0.3128 | 1.88 | 13600 | 0.3019 | 0.3453 |
| 0.3179 | 1.94 | 14000 | 0.3019 | 0.3489 |
| 0.305 | 2.0 | 14400 | 0.2957 | 0.3434 |
| 0.2627 | 2.05 | 14800 | 0.3254 | 0.3501 |
| 0.2676 | 2.11 | 15200 | 0.2905 | 0.3351 |
| 0.2687 | 2.16 | 15600 | 0.2966 | 0.3414 |
| 0.2711 | 2.22 | 16000 | 0.3002 | 0.3451 |
| 0.2797 | 2.27 | 16400 | 0.2849 | 0.3407 |
| 0.2859 | 2.33 | 16800 | 1.0602 | 0.3286 |
| 0.2756 | 2.38 | 17200 | 0.2934 | 0.3383 |
| 0.2881 | 2.44 | 17600 | 0.4832 | 0.3424 |
| 0.2683 | 2.49 | 18000 | 0.3135 | 0.3292 |
| 0.2694 | 2.55 | 18400 | 0.2816 | 0.3339 |
| 0.2644 | 2.6 | 18800 | 0.2860 | 0.3225 |
| 0.2689 | 2.66 | 19200 | 0.2796 | 0.3288 |
| 0.2559 | 2.72 | 19600 | 0.2795 | 0.3298 |
| 0.2741 | 2.77 | 20000 | 0.2707 | 0.3249 |
| 0.2619 | 2.83 | 20400 | 0.2639 | 0.3178 |
| 0.2633 | 2.88 | 20800 | 0.2819 | 0.3196 |
| 0.2594 | 2.94 | 21200 | 0.2853 | 0.3232 |
| 0.2529 | 2.99 | 21600 | 0.2548 | 0.3134 |
| 0.2404 | 3.05 | 22000 | 0.2959 | 0.3101 |
| 0.2329 | 3.1 | 22400 | 0.2627 | 0.3150 |
| 0.2289 | 3.16 | 22800 | 0.2679 | 0.3138 |
| 0.2224 | 3.21 | 23200 | 0.2628 | 0.3117 |
| 0.2218 | 3.27 | 23600 | 0.2699 | 0.3125 |
| 0.2175 | 3.33 | 24000 | 0.3142 | 0.3083 |
| 0.2233 | 3.38 | 24400 | 0.2634 | 0.3143 |
| 0.2199 | 3.44 | 24800 | 0.2498 | 0.3083 |
| 0.2097 | 3.49 | 25200 | 0.2616 | 0.3113 |
| 0.2281 | 3.55 | 25600 | 0.2605 | 0.3112 |
| 0.2222 | 3.6 | 26000 | 0.2573 | 0.3128 |
| 0.2313 | 3.66 | 26400 | 0.2600 | 0.3046 |
| 0.231 | 3.71 | 26800 | 0.2559 | 0.3105 |
| 0.224 | 3.77 | 27200 | 0.2542 | 0.3047 |
| 0.2214 | 3.82 | 27600 | 0.2642 | 0.3026 |
| 0.2172 | 3.88 | 28000 | 0.2486 | 0.2947 |
| 0.2159 | 3.93 | 28400 | 0.2543 | 0.3008 |
| 0.2231 | 3.99 | 28800 | 0.2688 | 0.3019 |
| 0.2036 | 4.05 | 29200 | 0.2658 | 0.3054 |
| 0.1994 | 4.1 | 29600 | 0.3157 | 0.2985 |
| 0.1908 | 4.16 | 30000 | 0.2503 | 0.2944 |
| 0.1802 | 4.21 | 30400 | 0.2479 | 0.3017 |
| 0.1957 | 4.27 | 30800 | 0.3001 | 0.2938 |
| 0.1919 | 4.32 | 31200 | 0.2544 | 0.2954 |
| 0.1923 | 4.38 | 31600 | 0.2653 | 0.2961 |
| 0.1908 | 4.43 | 32000 | 0.2738 | 0.3008 |
| 0.1929 | 4.49 | 32400 | 0.2583 | 0.2941 |
| 0.1843 | 4.54 | 32800 | 0.2459 | 0.2882 |
| 0.1942 | 4.6 | 33200 | 0.2830 | 0.2941 |
| 0.1921 | 4.66 | 33600 | 0.2751 | 0.2939 |
| 0.1914 | 4.71 | 34000 | 0.2554 | 0.2898 |
| 0.1902 | 4.77 | 34400 | 0.2896 | 0.2932 |
| 0.1925 | 4.82 | 34800 | 0.2418 | 0.2868 |
| 0.1757 | 4.88 | 35200 | 0.2369 | 0.2844 |
| 0.1825 | 4.93 | 35600 | 0.2769 | 0.2920 |
| 0.1844 | 4.99 | 36000 | 0.2451 | 0.2858 |
| 0.1645 | 5.04 | 36400 | 0.2514 | 0.2807 |
| 0.1562 | 5.1 | 36800 | 0.2551 | 0.2805 |
| 0.1634 | 5.15 | 37200 | 0.2666 | 0.2832 |
| 0.1705 | 5.21 | 37600 | 0.2355 | 0.2830 |
| 0.1672 | 5.26 | 38000 | 0.3299 | 0.2818 |
| 0.1638 | 5.32 | 38400 | 0.2663 | 0.2833 |
| 0.1559 | 5.38 | 38800 | 0.3040 | 0.2844 |
| 0.1599 | 5.43 | 39200 | 0.3044 | 0.2759 |
| 0.1629 | 5.49 | 39600 | 0.2440 | 0.2772 |
| 0.1651 | 5.54 | 40000 | 0.2493 | 0.2807 |
| 0.1589 | 5.6 | 40400 | 0.2447 | 0.2803 |
| 0.1611 | 5.65 | 40800 | 0.2503 | 0.2814 |
| 0.1595 | 5.71 | 41200 | 0.2447 | 0.2739 |
| 0.1614 | 5.76 | 41600 | 0.2316 | 0.2744 |
| 0.1523 | 5.82 | 42000 | 0.2259 | 0.2670 |
| 0.1544 | 5.87 | 42400 | 0.2499 | 0.2742 |
| 0.1579 | 5.93 | 42800 | 0.2360 | 0.2746 |
| 0.1563 | 5.99 | 43200 | 0.2353 | 0.2697 |
| 0.1495 | 6.04 | 43600 | 0.2480 | 0.2753 |
| 0.1374 | 6.1 | 44000 | 0.2342 | 0.2679 |
| 0.1363 | 6.15 | 44400 | 0.2396 | 0.2687 |
| 0.1368 | 6.21 | 44800 | 0.2544 | 0.2717 |
| 0.1378 | 6.26 | 45200 | 0.2342 | 0.2686 |
| 0.1326 | 6.32 | 45600 | 0.2393 | 0.2695 |
| 0.1383 | 6.37 | 46000 | 0.2322 | 0.2681 |
| 0.1396 | 6.43 | 46400 | 0.2890 | 0.2694 |
| 0.1347 | 6.48 | 46800 | 0.2377 | 0.2644 |
| 0.1314 | 6.54 | 47200 | 0.2277 | 0.2642 |
| 0.1289 | 6.59 | 47600 | 0.2316 | 0.2667 |
| 0.1343 | 6.65 | 48000 | 0.3072 | 0.2663 |
| 0.1301 | 6.71 | 48400 | 0.2744 | 0.2658 |
| 0.1312 | 6.76 | 48800 | 0.2836 | 0.2687 |
| 0.1337 | 6.82 | 49200 | 0.2402 | 0.2632 |
| 0.1353 | 6.87 | 49600 | 0.2439 | 0.2637 |
| 0.1351 | 6.93 | 50000 | 0.2439 | 0.2619 |
| 0.131 | 6.98 | 50400 | 0.2154 | 0.2609 |
| 0.1229 | 7.04 | 50800 | 0.2383 | 0.2606 |
| 0.1164 | 7.09 | 51200 | 0.2296 | 0.2562 |
| 0.1089 | 7.15 | 51600 | 0.2280 | 0.2614 |
| 0.1103 | 7.2 | 52000 | 0.2241 | 0.2576 |
| 0.115 | 7.26 | 52400 | 0.2306 | 0.2605 |
| 0.1168 | 7.32 | 52800 | 0.2200 | 0.2574 |
| 0.115 | 7.37 | 53200 | 0.2235 | 0.2564 |
| 0.1102 | 7.43 | 53600 | 0.2186 | 0.2593 |
| 0.1076 | 7.48 | 54000 | 0.2587 | 0.2573 |
| 0.1112 | 7.54 | 54400 | 0.2318 | 0.2566 |
| 0.114 | 7.59 | 54800 | 0.2253 | 0.2531 |
| 0.1056 | 7.65 | 55200 | 0.2240 | 0.2541 |
| 0.1097 | 7.7 | 55600 | 0.2342 | 0.2519 |
| 0.1089 | 7.76 | 56000 | 0.2173 | 0.2525 |
| 0.1143 | 7.81 | 56400 | 0.3170 | 0.2541 |
| 0.1116 | 7.87 | 56800 | 0.3029 | 0.2537 |
| 0.1088 | 7.92 | 57200 | 0.2835 | 0.2537 |
| 0.1071 | 7.98 | 57600 | 0.2136 | 0.2484 |
| 0.1031 | 8.04 | 58000 | 0.2211 | 0.2496 |
| 0.0952 | 8.09 | 58400 | 0.2259 | 0.2524 |
| 0.097 | 8.15 | 58800 | 0.2258 | 0.2479 |
| 0.0977 | 8.2 | 59200 | 0.2224 | 0.2489 |
| 0.0965 | 8.26 | 59600 | 0.2207 | 0.2484 |
| 0.0904 | 8.31 | 60000 | 0.2179 | 0.2473 |
| 0.1005 | 8.37 | 60400 | 0.2254 | 0.2471 |
| 0.0917 | 8.42 | 60800 | 0.2186 | 0.2480 |
| 0.0937 | 8.48 | 61200 | 0.2149 | 0.2452 |
| 0.0957 | 8.53 | 61600 | 0.2254 | 0.2481 |
| 0.0934 | 8.59 | 62000 | 0.2215 | 0.2473 |
| 0.1034 | 8.65 | 62400 | 0.2204 | 0.2472 |
| 0.0907 | 8.7 | 62800 | 0.2288 | 0.2447 |
| 0.0893 | 8.76 | 63200 | 0.2292 | 0.2446 |
| 0.0905 | 8.81 | 63600 | 0.2207 | 0.2454 |
| 0.0874 | 8.87 | 64000 | 0.2261 | 0.2448 |
| 0.0915 | 8.92 | 64400 | 0.2280 | 0.2430 |
| 0.0912 | 8.98 | 64800 | 0.2278 | 0.2420 |
| 0.0906 | 9.03 | 65200 | 0.2290 | 0.2417 |
| 0.0821 | 9.09 | 65600 | 0.2223 | 0.2409 |
| 0.0848 | 9.14 | 66000 | 0.2241 | 0.2406 |
| 0.0785 | 9.2 | 66400 | 0.2299 | 0.2400 |
| 0.0761 | 9.25 | 66800 | 0.2289 | 0.2408 |
| 0.082 | 9.31 | 67200 | 0.2297 | 0.2409 |
| 0.0755 | 9.37 | 67600 | 0.2300 | 0.2398 |
| 0.0771 | 9.42 | 68000 | 0.2275 | 0.2391 |
| 0.078 | 9.48 | 68400 | 0.2236 | 0.2388 |
| 0.0735 | 9.53 | 68800 | 0.2219 | 0.2382 |
| 0.0758 | 9.59 | 69200 | 0.2215 | 0.2395 |
| 0.0793 | 9.64 | 69600 | 0.2217 | 0.2383 |
| 0.0712 | 9.7 | 70000 | 0.2224 | 0.2383 |
| 0.0762 | 9.75 | 70400 | 0.2196 | 0.2374 |
| 0.0724 | 9.81 | 70800 | 0.2208 | 0.2376 |
| 0.0783 | 9.86 | 71200 | 0.2214 | 0.2378 |
| 0.0816 | 9.92 | 71600 | 0.2213 | 0.2378 |
| 0.0702 | 9.98 | 72000 | 0.2218 | 0.2378 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Hatman/ddpm-celebahq-finetuned-few-shot-universe
|
Hatman
| null | 6 | 0 |
diffusers
| 0 |
unconditional-image-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
| false | true | true | 408 |
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
A model from google/ddpm-celebahq-256 finetuned using the huggan/few-shot-universe dataset
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Hatman/ddpm-celebahq-finetuned-few-shot-universe')
image = pipeline().images[0]
image
```
|
austinmw/q-FrozenLake-v1-4x4-noSlippery
|
austinmw
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 397 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="austinmw/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
michalcisek5/q-Taxi-v3
|
michalcisek5
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 368 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="michalcisek5/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
austinmw/q-Taxi-v3
|
austinmw
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 364 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="austinmw/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
davanstrien/autotrain-encyclopaedia-illustrations-blog-post-3327992159
|
davanstrien
|
convnext
| 5 | 2 |
transformers
| 0 |
image-classification
| true | false | false | null | null |
['davanstrien/autotrain-data-encyclopaedia-illustrations-blog-post']
|
{'emissions': 4.608389135708385}
| 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['autotrain', 'vision', 'image-classification']
| false | true | true | 245 |
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 3327992159
- CO2 Emissions (in grams): 4.6084
## Validation Metrics
- Loss: 0.040
- Accuracy: 0.992
- Precision: 0.994
- Recall: 0.998
- AUC: 0.993
- F1: 0.996
|
hectorjelly/Ledbest_FC
|
hectorjelly
| null | 20 | 369 |
ml-agents
| 1 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 840 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: hectorjelly/Ledbest_FC
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CoreyMorris/poca-SoccerTwos-be-the-goldfish
|
CoreyMorris
| null | 20 | 372 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 861 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: CoreyMorris/poca-SoccerTwos-be-the-goldfish
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Hamid-reza/mt5-small-finetuned-digikala-titleGen
|
Hamid-reza
|
mt5
| 18 | 6 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 1,918 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-digikala-titleGen
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8801
- Rouge1: 70.3489
- Rouge2: 43.245
- Rougel: 34.6608
- Rougelsum: 34.6608
## Model description
More information needed
## Intended uses & limitations
More information needed
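A minimal inference sketch in the meantime (untested; the input below is a placeholder and should be a Persian product description in practice):
```python
from transformers import pipeline

# Title generation is framed as summarization for this checkpoint
title_generator = pipeline(
    "summarization",
    model="Hamid-reza/mt5-small-finetuned-digikala-titleGen",
)

description = "Replace this with a Persian product description."  # placeholder input
print(title_generator(description, max_length=32)[0]["summary_text"])
```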
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 7.5555 | 1.0 | 847 | 3.2594 | 45.6729 | 19.6446 | 31.5974 | 31.5974 |
| 4.1386 | 2.0 | 1694 | 3.0347 | 58.3021 | 32.8172 | 33.9012 | 33.9012 |
| 3.7449 | 3.0 | 2541 | 2.9665 | 66.731 | 40.8991 | 34.2203 | 34.2203 |
| 3.5575 | 4.0 | 3388 | 2.9102 | 65.598 | 39.4081 | 34.5116 | 34.5116 |
| 3.4062 | 5.0 | 4235 | 2.8944 | 69.6081 | 42.8707 | 34.6622 | 34.6622 |
| 3.3408 | 6.0 | 5082 | 2.8888 | 70.2123 | 42.8639 | 34.5669 | 34.5669 |
| 3.3025 | 7.0 | 5929 | 2.8801 | 70.3489 | 43.245 | 34.6608 | 34.6608 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
dbaibak/poca-SoccerTwos
|
dbaibak
| null | 20 | 371 |
ml-agents
| 1 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 841 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: dbaibak/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
austinmw/q-FrozenLake-v1-8x8-slippery
|
austinmw
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-8x8', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 395 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="austinmw/q-FrozenLake-v1-8x8-slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
victorivus/ppo-LunarLander-v2
|
victorivus
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
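A minimal loading sketch in the meantime (untested; the checkpoint filename inside the repo is an assumption and may differ):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is assumed; check the repo's file list if loading fails
checkpoint = load_from_hub("victorivus/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```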
|
stelladk/Reinforce-CartPole-v1
|
stelladk
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
xiaozhangMJXXZ/schoolanime
|
xiaozhangMJXXZ
| null | 2 | 0 | null | 0 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 2,133 |
https://civitai.com/models/7189/school-anime
For me, this model is my favorite, and of course I hope you will like it too.
78 versions later, the current model produces good light-and-shadow effects and cool background details, and is compatible with many special tags.
(If you want a striking image, I recommend referring to the example images for how to write tags; this model has a very high ceiling and a very low floor.)
On second thought, I'd better list how to use it directly; after all, as an old Chinese saying goes, "授人以鱼不如授人以渔" (it is better to teach a man to fish than to give him a fish).
(((calligraphy_brush, limited palette))), (extremely detailed CG unity 8k wallpaper, masterpiece, best quality, ultra-detailed, best shadow), high resolution, (detailed background),
((add your favorite weather or scene here))
(beautiful detailed face, beautiful detailed eyes), (perfect_hands, perfect_body, perfect_anatomy), (arm + hand + 1thumb + 4finger),
High contrast, (best illumination, an extremely delicate and beautiful), ((cinematic light)), hyper detail, (illustration), dramatic light, intricate details, dynamic angle,
1 girl, (((solo))), ((your favorite hairstyle and hair color; link them with "+")),
((add the character details you want, as in the example images))
Everything can be changed to your liking. Make pictures that you like! If possible, I really hope you can share in the comments section!
Of course, nsfw content is also suitable, and I recommend that you use it with some erotic lora models
I've generated a lot of comparison charts on CFG for you to refer to, and my personal favourite is in the 3.5-11.5 range.
It works well for both close-up and distant faces.
For tag writing, I recommend this guide (it's in Chinese):
元素同典:确实不完全科学的魔导书 (qq.com)
Of course, you can use this even if you are not familiar with Chinese
元素法典 第二卷——Novel AI 元素魔法全收录 (qq.com)
If you use this model to generate pictures that you like, I really, really hope you share them in the comments.
I'll definitely give you a thumbs up!
|
foxanthis/data-kds
|
foxanthis
|
gpt_neo
| 13 | 3 |
transformers
| 0 |
text-generation
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 929 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data-kds
This model is a fine-tuned version of [flax-community/gpt-neo-125M-code-clippy](https://huggingface.co/flax-community/gpt-neo-125M-code-clippy) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
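A minimal generation sketch in the meantime (untested; the base checkpoint is a code model, so a code-like prompt is assumed to be a reasonable input):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="foxanthis/data-kds")

prompt = "def load_data(path):"  # placeholder prompt
print(generator(prompt, max_length=64)[0]["generated_text"])
```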
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
deprem-ml/deprem-ner
|
deprem-ml
|
bert
| 13 | 197 |
transformers
| 23 |
token-classification
| true | false | false |
apache-2.0
|
['tr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,453 |
## deprem-ner
With this model, we tried to extract information such as street, province, and district from the reports of people trapped under rubble in the earthquake.
Example inputs:
- "Lütfen yardım Akevler mahallesi Rüzgar sokak Tuncay apartmanı zemin kat Antakya akrabalarım göçük altında #hatay #Afad"
- "MARAȘA'ta arkadaşimizdan haber alamıyoruz ACIL yardım Penta Park konutları 1. Blok en üst kat 11. Kat \n\n@AFADBaskanlik #kahramanmaraş\nACİL"
```
from transformers import pipeline
ner_pipe = pipeline("token-classification", "deprem-ml/deprem-ner")
predictions = ner_pipe("Lütfen yardım Akevler mahallesi Rüzgar sokak Tuncay apartmanı zemin kat Antakya akrabalarım göçük altında #hatay #Afad")
```
Example output:
```
[
{
"entity_group": "mahalle",
"score": 0.8160411715507507,
"word": "Akevler mahallesi",
"start": 14,
"end": 31
},
{
"entity_group": "sokak",
"score": 0.940501868724823,
"word": "Rüzgar sokak",
"start": 32,
"end": 44
},
{
"entity_group": "Apartman/Site",
"score": 0.8081040978431702,
"word": "Tuncay apartmanı",
"start": 45,
"end": 61
},
{
"entity_group": "ilce",
"score": 0.854024350643158,
"word": "Antakya",
"start": 72,
"end": 79
}
]
```
### Evaluation
We compared this model with other models on the Hugging Face Hub; the results on 30 sample inputs are available [in this repository](https://huggingface.co/datasets/deprem-ml/butun_model_benchmarklari).
|
foxanthis/data-deepit-kds
|
foxanthis
|
gpt_neo
| 17 | 2 |
transformers
| 0 |
text-generation
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,287 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data-deepit-kds
This model is a fine-tuned version of [flax-community/gpt-neo-125M-code-clippy](https://huggingface.co/flax-community/gpt-neo-125M-code-clippy) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 0.5930 |
| No log | 2.0 | 6 | 0.3968 |
| No log | 3.0 | 9 | 0.3333 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Deisler/a2c-PandaReachDense-v2
|
Deisler
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
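Until then, a rough loading sketch (untested; the checkpoint filename is assumed, and the exact `reset`/`step` signatures depend on the installed `gym`/`panda-gym` versions):
```python
import gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is an assumption; check the repo's file list if loading fails
checkpoint = load_from_hub("Deisler/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)  # old gym API; newer versions also return `truncated`
    if done:
        obs = env.reset()
env.close()
```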
|
Mandoryan/PPO-LunarLander-v2
|
Mandoryan
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
scribis/italian-literature-model-mini
|
scribis
|
gpt2
| 9 | 5 |
transformers
| 0 |
text-generation
| false | true | false |
mit
|
['it']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback', 'text_generator']
| true | true | true | 1,583 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# italian-literature-model-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.7067
- Validation Loss: 5.6842
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
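A minimal generation sketch in the meantime (untested; only TensorFlow weights appear to be published, hence `framework="tf"`):
```python
from transformers import pipeline

# The checkpoint was trained with Keras, so request the TensorFlow backend explicitly
generator = pipeline(
    "text-generation",
    model="scribis/italian-literature-model-mini",
    framework="tf",
)

prompt = "Nel mezzo del cammin di nostra vita"  # any Italian prompt works here
print(generator(prompt, max_length=60, do_sample=True)[0]["generated_text"])
```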
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 15686, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.7065 | 5.6842 | 0 |
| 5.7065 | 5.6842 | 1 |
| 5.7067 | 5.6842 | 2 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
konrad-wesub/roberta-base-iphone-2
|
konrad-wesub
|
xlm-roberta
| 9 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,258 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-iphone-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1359
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
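A minimal inference sketch in the meantime (untested; the label names and their meanings come from the fine-tuning data, which is not documented here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="konrad-wesub/roberta-base-iphone-2")

# Placeholder input sentence
print(classifier("Selling a barely used iPhone 13, battery health 98%."))
```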
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 27 | 0.2765 | 0.8333 |
| No log | 2.0 | 54 | 0.1359 | 0.9833 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
deetsml/dummy-model
|
deetsml
|
bart
| 14 | 20 |
transformers
| 0 |
text-classification
| true | false | false | null |
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-classification', 'transformers']
| false | true | true | 3,573 |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 14756 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "accuracy",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 14756,
"warmup_steps": 1476,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BartModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
MultiversexPeeps/wave-concepts
|
MultiversexPeeps
| null | 21 | 9 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 863 |
### Wave Concepts Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Information on this model will be here: https://civitai.com/user/duskfallcrew
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
wvebg1 (use this token in your prompt)
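A minimal `diffusers` sketch (untested; assumes the repo hosts diffusers-format weights, which is what the Dreambooth Training Space pushes):
```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "MultiversexPeeps/wave-concepts", torch_dtype=torch.float16
).to("cuda")

# Include the concept token "wvebg1" in the prompt
image = pipe("a seaside town, wvebg1 style").images[0]
image.save("wave_concept.png")
```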
|
Seyfelislem/wspr-sm-ar4
|
Seyfelislem
|
whisper
| 8 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,275 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wspr-sm-ar4
This model is a fine-tuned version of [Seyfelislem/wspr-sm-ar3](https://huggingface.co/Seyfelislem/wspr-sm-ar3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3664
- Wer: 58.7933
## Model description
More information needed
## Intended uses & limitations
More information needed
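A minimal transcription sketch in the meantime (untested; `audio.wav` is a placeholder path to an Arabic speech recording):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Seyfelislem/wspr-sm-ar4",
)

print(asr("audio.wav")["text"])  # requires ffmpeg for audio decoding
```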
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0972 | 1.0 | 150 | 0.3664 | 58.7933 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.11.0
- Datasets 2.9.1.dev0
- Tokenizers 0.12.1
|
vivals/babanaltaf
|
vivals
| null | 19 | 5 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 418 |
### babanaltaf Dreambooth model trained by vivals with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
SfinOe/stable-diffusion-v1.5
|
SfinOe
| null | 22 | 3 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
| false | true | true | 13,362 |
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
### Original GitHub Repository
1. Download the weights
- [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference
- [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
EgilKarlsen/ApacheGPTNEO
|
EgilKarlsen
|
gpt_neo
| 9 | 5 |
transformers
| 0 |
text-generation
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,246 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ApacheGPTNEO
This model is a fine-tuned version of [EgilKarlsen/GPTNEO](https://huggingface.co/EgilKarlsen/GPTNEO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2460
## Model description
More information needed
## Intended uses & limitations
More information needed
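A minimal generation sketch in the meantime (untested; judging by the model name, log-like prefixes are assumed to be the intended input):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="EgilKarlsen/ApacheGPTNEO")

prompt = "127.0.0.1 - - "  # placeholder, Apache-access-log-style prefix
print(generator(prompt, max_length=64, do_sample=True)[0]["generated_text"])
```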
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2787 | 1.0 | 18303 | 0.2792 |
| 0.24 | 2.0 | 36606 | 0.2524 |
| 0.2044 | 3.0 | 54909 | 0.2460 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mrm8488/xlm-v-base-finetuned-xglue-xnli
|
mrm8488
|
xlm-roberta
| 11 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null |
['xglue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'xnli']
| true | true | true | 2,704 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-V (base) fine-tuned on XNLI
This model is a fine-tuned version of [XLM-V (base)](https://huggingface.co/facebook/xlm-v-base) on the XNLI (XGLUE) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6511
- Accuracy: 0.7403
## Model description
More information needed
## Intended uses & limitations
More information needed
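A zero-shot sketch in the meantime (untested; whether the zero-shot pipeline works out of the box depends on the entailment label mapping in the checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="mrm8488/xlm-v-base-finetuned-xglue-xnli",
)

# XNLI is multilingual, so non-English inputs are fair game
print(classifier(
    "El nuevo teléfono tiene una batería que dura dos días.",
    candidate_labels=["tecnología", "política", "deportes"],
))
```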
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0994 | 0.08 | 1000 | 1.0966 | 0.3697 |
| 1.0221 | 0.16 | 2000 | 1.0765 | 0.4560 |
| 0.8437 | 0.24 | 3000 | 0.8472 | 0.6179 |
| 0.6997 | 0.33 | 4000 | 0.7650 | 0.6804 |
| 0.6304 | 0.41 | 5000 | 0.7227 | 0.7007 |
| 0.5972 | 0.49 | 6000 | 0.7430 | 0.6977 |
| 0.5886 | 0.57 | 7000 | 0.7365 | 0.7066 |
| 0.5585 | 0.65 | 8000 | 0.6819 | 0.7223 |
| 0.5464 | 0.73 | 9000 | 0.7222 | 0.7046 |
| 0.5289 | 0.81 | 10000 | 0.7290 | 0.7054 |
| 0.5298 | 0.9 | 11000 | 0.6824 | 0.7221 |
| 0.5241 | 0.98 | 12000 | 0.6650 | 0.7268 |
| 0.4806 | 1.06 | 13000 | 0.6861 | 0.7308 |
| 0.4715 | 1.14 | 14000 | 0.6619 | 0.7304 |
| 0.4645 | 1.22 | 15000 | 0.6656 | 0.7284 |
| 0.4443 | 1.3 | 16000 | 0.7026 | 0.7270 |
| 0.4582 | 1.39 | 17000 | 0.7055 | 0.7225 |
| 0.4456 | 1.47 | 18000 | 0.6592 | 0.7361 |
| 0.44 | 1.55 | 19000 | 0.6816 | 0.7329 |
| 0.4419 | 1.63 | 20000 | 0.6772 | 0.7357 |
| 0.4403 | 1.71 | 21000 | 0.6745 | 0.7319 |
| 0.4348 | 1.79 | 22000 | 0.6678 | 0.7338 |
| 0.4355 | 1.87 | 23000 | 0.6614 | 0.7365 |
| 0.4295 | 1.96 | 24000 | 0.6511 | 0.7403 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
bhpardo/clasificador-amazonproducts2
|
bhpardo
|
bert
| 10 | 7 |
transformers
| 0 |
text-classification
| true | false | false | null | null |
['amazon_reviews_multi']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classification', 'generated_from_trainer']
| true | true | true | 1,389 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-amazonproducts2
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2356
- Accuracy: 0.5563
## Model description
More information needed
## Intended uses & limitations
More information needed
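A minimal inference sketch in the meantime (untested; the label set comes from the amazon_reviews_multi fine-tuning setup and is not documented here):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bhpardo/clasificador-amazonproducts2",
)

# Placeholder Spanish review text
print(classifier("El producto llegó roto y el vendedor nunca respondió."))
```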
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2045 | 1.0 | 658 | 1.0496 | 0.5845 |
| 0.9569 | 2.0 | 1316 | 1.0380 | 0.5704 |
| 0.7637 | 3.0 | 1974 | 1.2356 | 0.5563 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
summervent/speller-t5-909
|
summervent
|
t5
| 21 | 15 |
transformers
| 0 |
text2text-generation
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,259 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speller-t5-909
This model is a fine-tuned version of [sberbank-ai/ruT5-large](https://huggingface.co/sberbank-ai/ruT5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0814
- Rouge1: 18.2203
- Rouge2: 5.9322
- Rougel: 17.7966
- Rougelsum: 18.2203
- Gen Len: 42.0424
## Model description
More information needed
## Intended uses & limitations
More information needed
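A minimal inference sketch in the meantime (untested; the model name suggests Russian spelling correction, and it is unknown whether a task prefix is required):
```python
from transformers import pipeline

speller = pipeline("text2text-generation", model="summervent/speller-t5-909")

# Deliberately misspelled Russian placeholder input
print(speller("Превет, как дила?")[0]["generated_text"])
```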
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 0.3022 | 0.1 | 1500 | 0.1563 | 18.2203 | 5.9322 | 17.7966 | 18.2203 | 43.4492 |
| 0.2274 | 0.2 | 3000 | 0.1311 | 18.2203 | 5.9322 | 17.7966 | 18.2203 | 42.3814 |
| 0.2001 | 0.31 | 4500 | 0.1128 | 18.2203 | 5.9322 | 17.7966 | 18.2203 | 41.9407 |
| 0.1757 | 0.41 | 6000 | 0.1063 | 18.2203 | 5.9322 | 17.7966 | 18.2203 | 42.2542 |
| 0.1612 | 0.51 | 7500 | 0.1002 | 17.9379 | 5.0847 | 17.5141 | 17.7966 | 42.339 |
| 0.1718 | 0.61 | 9000 | 0.0921 | 18.2203 | 5.9322 | 17.7966 | 18.2203 | 42.0508 |
| 0.1678 | 0.72 | 10500 | 0.0834 | 17.7966 | 5.0847 | 17.3729 | 17.7966 | 41.9831 |
| 0.1407 | 0.82 | 12000 | 0.0793 | 18.2203 | 5.9322 | 17.7966 | 18.2203 | 42.2119 |
| 0.1447 | 0.92 | 13500 | 0.0814 | 18.2203 | 5.9322 | 17.7966 | 18.2203 | 42.0424 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
kmposkid1/ppo-Huggy
|
kmposkid1
| null | 32 | 8 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 820 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: kmposkid1/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
petergoldstein/ppo-LunarLander-v2
|
petergoldstein
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
MultiversexPeeps/art-of-wave
|
MultiversexPeeps
| null | 21 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 866 |
### Art of Wave Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Information on this model will be here: https://civitai.com/user/duskfallcrew
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
wvert1 (use this token in your prompt)
|
atatavana/rhenus_model
|
atatavana
|
layoutlmv3
| 20 | 35 |
transformers
| 0 |
token-classification
| true | false | false |
cc-by-nc-sa-4.0
| null |
['sroie']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,564 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rhenus_model
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1560
- Precision: 0.8057
- Recall: 0.8397
- F1: 0.8223
- Accuracy: 0.9734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.96 | 100 | 0.7818 | 0.0909 | 0.0042 | 0.0081 | 0.8569 |
| No log | 3.92 | 200 | 0.5681 | 0.2442 | 0.0886 | 0.1300 | 0.8708 |
| No log | 5.88 | 300 | 0.4568 | 0.2803 | 0.1857 | 0.2234 | 0.8913 |
| No log | 7.84 | 400 | 0.3759 | 0.5053 | 0.4051 | 0.4496 | 0.9196 |
| 0.5952 | 9.8 | 500 | 0.2987 | 0.6560 | 0.6034 | 0.6286 | 0.9456 |
| 0.5952 | 11.76 | 600 | 0.2585 | 0.6721 | 0.6920 | 0.6819 | 0.9456 |
| 0.5952 | 13.73 | 700 | 0.2016 | 0.7247 | 0.7553 | 0.7397 | 0.9595 |
| 0.5952 | 15.69 | 800 | 0.2053 | 0.704 | 0.7426 | 0.7228 | 0.9573 |
| 0.5952 | 17.65 | 900 | 0.1845 | 0.7782 | 0.7848 | 0.7815 | 0.9667 |
| 0.1097 | 19.61 | 1000 | 0.1917 | 0.75 | 0.7848 | 0.7670 | 0.9623 |
| 0.1097 | 21.57 | 1100 | 0.1897 | 0.8099 | 0.8270 | 0.8184 | 0.9695 |
| 0.1097 | 23.53 | 1200 | 0.1848 | 0.7901 | 0.8101 | 0.8 | 0.9684 |
| 0.1097 | 25.49 | 1300 | 0.1533 | 0.8016 | 0.8523 | 0.8262 | 0.9734 |
| 0.1097 | 27.45 | 1400 | 0.1534 | 0.8204 | 0.8481 | 0.8340 | 0.9750 |
| 0.0384 | 29.41 | 1500 | 0.1879 | 0.8024 | 0.8397 | 0.8206 | 0.9695 |
| 0.0384 | 31.37 | 1600 | 0.1550 | 0.816 | 0.8608 | 0.8378 | 0.9750 |
| 0.0384 | 33.33 | 1700 | 0.1598 | 0.8279 | 0.8523 | 0.8399 | 0.9756 |
| 0.0384 | 35.29 | 1800 | 0.1643 | 0.8148 | 0.8354 | 0.825 | 0.9728 |
| 0.0384 | 37.25 | 1900 | 0.1558 | 0.792 | 0.8354 | 0.8131 | 0.9728 |
| 0.02 | 39.22 | 2000 | 0.1699 | 0.7944 | 0.8312 | 0.8124 | 0.9717 |
| 0.02 | 41.18 | 2100 | 0.1558 | 0.8138 | 0.8481 | 0.8306 | 0.9750 |
| 0.02 | 43.14 | 2200 | 0.1566 | 0.8024 | 0.8397 | 0.8206 | 0.9728 |
| 0.02 | 45.1 | 2300 | 0.1617 | 0.8049 | 0.8354 | 0.8199 | 0.9734 |
| 0.02 | 47.06 | 2400 | 0.1571 | 0.8016 | 0.8354 | 0.8182 | 0.9723 |
| 0.014 | 49.02 | 2500 | 0.1560 | 0.8057 | 0.8397 | 0.8223 | 0.9734 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.2.2
- Tokenizers 0.13.2
|
eshwarprasadS/q-FrozenLake-v1-4x4-noSlippery
|
eshwarprasadS
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 402 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="eshwarprasadS/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
eshwarprasadS/taxi_qlearner
|
eshwarprasadS
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 373 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="eshwarprasadS/taxi_qlearner", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
SRobbins/a2c-AntBulletEnv-v0
|
SRobbins
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DragonProgrammer/dqn-SpaceInvadersNoFrameskip-v4
|
DragonProgrammer
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,242 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DragonProgrammer -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DragonProgrammer -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga DragonProgrammer
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
gatardochi/q-FrozenLake-v1-4x4-noSlippery
|
gatardochi
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 399 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="gatardochi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gatardochi/q-Taxi-v3
|
gatardochi
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 366 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="gatardochi/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|