repo_id (string, len 4-122) | author (string, len 2-38, nullable) | model_type (string, len 2-33, nullable) | files_per_repo (int64, 2-39k) | downloads_30d (int64, 0-33.7M) | library (string, len 2-37, nullable) | likes (int64, 0-4.87k) | pipeline (string, len 5-30, nullable) | pytorch (bool, 2 classes) | tensorflow (bool, 2 classes) | jax (bool, 2 classes) | license (string, len 2-33, nullable) | languages (string, len 2-1.63k, nullable) | datasets (string, len 2-2.58k, nullable) | co2 (string, len 6-258, nullable) | prs_count (int64, 0-125) | prs_open (int64, 0-120) | prs_merged (int64, 0-46) | prs_closed (int64, 0-34) | discussions_count (int64, 0-218) | discussions_open (int64, 0-148) | discussions_closed (int64, 0-70) | tags (string, len 2-513) | has_model_index (bool, 2 classes) | has_metadata (bool, 2 classes) | has_text (bool, 1 class) | text_length (int64, 201-598k) | readme (string, len 0-598k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jackshoemaker/bert-finetuned-squad | jackshoemaker | bert | 16 | 12 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 954 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
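As a minimal usage sketch (not part of the original card), the model can be queried with the standard `transformers` question-answering pipeline; only the repo id is taken from this card:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
qa = pipeline("question-answering", model="jackshoemaker/bert-finetuned-squad")

result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the squad dataset.",
)
print(result["answer"], result["score"])
```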
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gatardochi/ppo-SnowballTarget | gatardochi | null | 20 | 1 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget'] | false | true | true | 857 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: gatardochi/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
smilingface88/xlm-roberta-base-finetuned-panx-de-fr | smilingface88 | xlm-roberta | 10 | 2 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,321 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1629
- F1: 0.8584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
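For illustration (not from the original card), the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows; the output directory is a placeholder, and the Adam betas/epsilon and linear scheduler are the library defaults:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de-fr",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```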
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2904 | 1.0 | 715 | 0.1823 | 0.8286 |
| 0.1446 | 2.0 | 1430 | 0.1626 | 0.8488 |
| 0.0941 | 3.0 | 2145 | 0.1629 | 0.8584 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
marcowong02/bert-finetuned-squad | marcowong02 | bert | 10 | 9 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 954 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BridgeTower/bridgetower-large-itm-mlm-itc | BridgeTower | null | 7 | 124 | transformers | 0 | null | true | false | false | mit | ['en'] | ['conceptual_captions', 'conceptual_12m', 'sbu_captions', 'visual_genome', 'mscoco_captions'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bridgetower', 'gaudi'] | false | true | true | 5,176 |
# BridgeTower large-itm-mlm-itc model
The BridgeTower model was proposed in "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, and Nan Duan.
The model was pretrained on English-language data using masked language modeling (MLM) and image-text matching (ITM) objectives. It was introduced in
[this paper](https://arxiv.org/pdf/2206.08657.pdf) and first released in
[this repository](https://github.com/microsoft/BridgeTower).
BridgeTower got accepted to [AAAI'23](https://aaai.org/Conferences/AAAI-23/).
## Model description
The abstract from the paper is the following:
Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.
## Intended uses & limitations
### How to use
Here is how to use this model to perform image and text matching:
```python
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
# forward pass
scores = dict()
for text in texts:
    # prepare inputs
    encoding = processor(image, text, return_tensors="pt")
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, 1].item()
```
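The `scores` dict now maps each candidate caption to its image-text matching logit. As a small follow-up sketch (not in the original card), the best-matching caption can be read off directly:
```python
# Caption with the highest image-text matching score.
best_caption = max(scores, key=scores.get)
print(best_caption)
```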
Here is how to use this model to perform masked language modeling:
```python
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000360943.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
text = "a <mask> looking out of the window"
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist())
print(results)
#.a cat looking out of the window.
```
## Training data
The BridgeTower model was pretrained on the following public image-caption datasets:
- [Conceptual Captions (CC3M)](https://ai.google.com/research/ConceptualCaptions/)
- [Conceptual 12M (CC12M)](https://github.com/google-research-datasets/conceptual-12m)
- [SBU Captions](https://www.cs.rice.edu/~vo9/sbucaptions/)
- [MSCOCO Captions](https://arxiv.org/pdf/1504.00325.pdf)
- [Visual Genome](https://visualgenome.org/)
The total number of unique images in the combined data is around 14M.
## Training procedure
### Pretraining
The model was pre-trained for 10 epochs on an Intel AI supercomputing cluster using 512 Gaudis and 128 Xeons with a batch size of 2048.
The optimizer used was AdamW with a learning rate of 1e-7. No data augmentation was used except for center crop. The image resolution during pre-training was set to 294 x 294.
## Evaluation results
Please refer to [Table 5](https://arxiv.org/pdf/2206.08657.pdf) for BridgeTower's performance on Image Retrieval and other downstream tasks.
### BibTeX entry and citation info
```bibtex
@article{xu2022bridge,
title={BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning},
author={Xu, Xiao and Wu, Chenfei and Rosenman, Shachar and Lal, Vasudev and Che, Wanxiang and Duan, Nan},
journal={arXiv preprint arXiv:2206.08657},
year={2022}
}
```
|
DunnBC22/vit-base-patch16-224-in21k-weather-images-classification | DunnBC22 | vit | 15 | 0 | transformers | 1 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,308 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-weather-images-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2255
- Accuracy: 0.9340
- Weighted f1: 0.9341
- Micro f1: 0.9340
- Macro f1: 0.9372
- Weighted recall: 0.9340
- Micro recall: 0.9340
- Macro recall: 0.9354
- Weighted precision: 0.9347
- Micro precision: 0.9340
- Macro precision: 0.9398
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 2.4333 | 1.0 | 337 | 0.3374 | 0.9036 | 0.9028 | 0.9036 | 0.9080 | 0.9036 | 0.9036 | 0.9002 | 0.9088 | 0.9036 | 0.9234 |
| 0.4422 | 2.0 | 674 | 0.2504 | 0.9228 | 0.9226 | 0.9228 | 0.9285 | 0.9228 | 0.9228 | 0.9273 | 0.9248 | 0.9228 | 0.9318 |
| 0.1051 | 3.0 | 1011 | 0.2255 | 0.9340 | 0.9341 | 0.9340 | 0.9372 | 0.9340 | 0.9340 | 0.9354 | 0.9347 | 0.9340 | 0.9398 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.12.1
|
gatardochi/ppo-Pyramids | gatardochi | null | 16 | 2 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids'] | false | true | true | 833 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: gatardochi/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
amlannayak/finetuning-sentiment-model-3000-samples | amlannayak | distilbert | 10 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,049 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3251
- Accuracy: 0.8767
- F1: 0.8787
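As a minimal usage sketch (not part of the original card), the model can be called through the `transformers` text-classification pipeline; only the repo id is taken from this card:
```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub.
sentiment = pipeline(
    "text-classification",
    model="amlannayak/finetuning-sentiment-model-3000-samples",
)
print(sentiment("This movie was a delight from start to finish."))
```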
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
lancechen/ppo-LunarLander-v2 | lancechen | null | 12 | 1 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
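As a hedged sketch of what that TODO might look like (the checkpoint filename `ppo-LunarLander-v2.zip` is an assumption based on common naming, not confirmed by this card):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(
    repo_id="lancechen/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

# Roll out one episode with the trained agent.
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```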
|
juye/_output | juye | null | 38 | 0 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora'] | false | true | true | 356 |
# LoRA DreamBooth - juye/_output
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of sks woman" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




|
laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup | laion | null | 12 | 1 | open_clip | 0 | zero-shot-image-classification | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['zero-shot-image-classification', 'clip'] | false | true | true | 12,492 |
# Model card for CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
# Model Details
## Model Description
A series of CLIP [ConvNeXt-Large](https://arxiv.org/abs/2201.03545) (w/ extra text depth, vision MLP head) models trained on the LAION-2B (english) subset of [LAION-5B](https://arxiv.org/abs/2210.08402) using [OpenCLIP](https://github.com/mlfoundations/open_clip).
The models utilize:
* the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Large model (`convnext_large`) as the image tower
* a MLP (`fc - gelu - drop - fc`) head in vision tower instead of the single projection of other CLIP models
* a text tower with same width but 4 layers more depth than ViT-L / RN50x16 models (depth 16, embed dim 768).
This 320x320 resolution model is a soup (weight average) of 3 fine-tunes of [CLIP-convnext_large_d.laion2B-s26B-b102K-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) at a higher resolution. It is an average of 3 fine-tunes from the final checkpoint of the original 256x256 training run, each with an additional ~2-3B samples and a lower learning rate. Each fine-tune used a different learning rate (1e-4, 6e-5, 5e-5) and a different number of samples (3.2B, 2B, and 2.5B, respectively).
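A "soup" here is a plain average of checkpoint weights. As a generic illustration of the idea (not the script used for this model; the filenames are hypothetical):
```python
import torch

def average_checkpoints(paths):
    """Average the parameters of several checkpoints (a "model soup")."""
    state_dicts = [torch.load(p, map_location="cpu") for p in paths]
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

# Hypothetical filenames for the three fine-tunes averaged in this soup.
soup = average_checkpoints(["ft_lr1e-4.pt", "ft_lr6e-5.pt", "ft_lr5e-5.pt"])
```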
At 320x320, the ConvNeXt-Large-D is significantly more efficient than the L/14 model at 336x336 that OpenAI fine-tuned: the L/14-336 model uses 2.5x the GMACs and 2.8x the activations, and has 1.22x the parameters.
| Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ------------ | --------- |
| [convnext_large_d.laion2b_s26b_b102k-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1), D(0.1) | 75.9 |
| [convnext_large_d_320.laion2b_s29b_b131k-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.6 |
| [convnext_large_d_320.laion2b_s29b_b131k-ft-soup](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.9 |
RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only, D = Dropout (prob) -- image tower head only
LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering.
Model training done by Ross Wightman on the [stability.ai](https://stability.ai/) cluster.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
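As a sketch of zero-shot classification with OpenCLIP (assuming the `hf-hub:` loading path; the image path and labels are placeholders, not from this card):
```python
import torch
from PIL import Image
import open_clip

repo = "hf-hub:laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup"
model, _, preprocess = open_clip.create_model_and_transforms(repo)
tokenizer = open_clip.get_tokenizer(repo)

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder path
text = tokenizer(["a diagram", "a dog", "a cat"])  # placeholder labels

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```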
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
In addition to the above notice, the LAION-5B dataset used in training of these models has additional considerations; see below.
# Training Details
## Training Data
This model was trained with one of the following (see the table in the intro):
* LAION-2B - A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
* LAION-Aesthetic - A 900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning holds there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come along with training large-scale models, as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. However, while providing our dataset openly, we do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
All 320x320 model fine-tunes were trained with a global batch size of 131072 for 10-16 checkpoint intervals of 203.7M samples for a total of ~2-3B samples seen over fine-tune.
For 320x320 models, a slurm script w/ srun below was used on 64 8-GPU (A100 40GB) nodes (Stability).
```
/opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \
--save-frequency 1 \
--name "convnext_large_320" \
    --pretrained "/runs/convnext_large_256/epoch_128.pt" \
--resume 'latest' \
    --train-data="pipe:aws s3 cp s3://mybucket/path/laion{00000..xxxxx}.tar -" \
--train-num-samples 203666042 \
--dataset-type webdataset \
--precision amp_bfloat16 \
--beta2 0.98 \
--warmup 2000 \
--batch-size=256 \
--epochs=12 \
--dataset-resampled \
--aug-cfg use_timm=True scale='(0.5, 1.0)' re_prob=0.4 \
--clip-grad-norm 5.0 \
--lr 5e-5 \
--workers=6 \
--model "convnext_large_d_320" \
--seed 0 \
--ddp-static-graph \
--local-loss \
--gather-with-grad \
--grad-checkpointing
```
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval.
## Results
The models achieve between 75.9 and 76.9 top-1 zero-shot accuracy on ImageNet-1k.
Zero-shot accuracy curve of the original from-scratch 256x256 training:

An initial round of benchmarks has been performed on a wider range of datasets; the results are viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) for compute used to train this model.
# Citation
**BibTeX:**
LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenCLIP software
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@InProceedings{pmlr-v162-wortsman22a,
title = {Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time},
author = {Wortsman, Mitchell and Ilharco, Gabriel and Gadre, Samir Ya and Roelofs, Rebecca and Gontijo-Lopes, Raphael and Morcos, Ari S and Namkoong, Hongseok and Farhadi, Ali and Carmon, Yair and Kornblith, Simon and Schmidt, Ludwig},
booktitle = {Proceedings of the 39th International Conference on Machine Learning},
pages = {23965--23998},
year = {2022},
editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
volume = {162},
series = {Proceedings of Machine Learning Research},
month = {17--23 Jul},
publisher = {PMLR},
pdf = {https://proceedings.mlr.press/v162/wortsman22a/wortsman22a.pdf},
url = {https://proceedings.mlr.press/v162/wortsman22a.html}
}
```
OpenAI CLIP paper
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@Article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
Patrickrpds/mriverdb | Patrickrpds | null | 19 | 0 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 421 |
### mriverdb Dreambooth model trained by Patrickrpds with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft | laion | null | 10 | 1 | open_clip | 0 | zero-shot-image-classification | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['zero-shot-image-classification', 'clip'] | false | true | true | 11,371 |
# Model card for CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
# Model Details
## Model Description
A series of CLIP [ConvNeXt-Large](https://arxiv.org/abs/2201.03545) (w/ extra text depth, vision MLP head) models trained on the LAION-2B (english) subset of [LAION-5B](https://arxiv.org/abs/2210.08402) using [OpenCLIP](https://github.com/mlfoundations/open_clip).
The models utilize:
* the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Large model (`convnext_large`) as the image tower
* a MLP (`fc - gelu - drop - fc`) head in vision tower instead of the single projection of other CLIP models
* a text tower with same width but 4 layers more depth than ViT-L / RN50x16 models (depth 16, embed dim 768).
This 320x320 resolution model is a fine-tune of [CLIP-convnext_large_d.laion2B-s26B-b102K-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) at a higher resolution. It was fine-tuned from the final checkpoint of the original 256x256 training run with an additional ~2.5B samples and a lower learning rate.
At 320x320, the ConvNeXt-Large-D is significantly more efficient than the L/14 model at 336x336 that OpenAI fine-tuned: the L/14-336 model uses 2.5x the GMACs and 2.8x the activations, and has 1.22x the parameters.
| Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ------------ | --------- |
| [convnext_large_d.laion2b_s26b_b102k-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1), D(0.1) | 75.9 |
| [convnext_large_d_320.laion2b_s29b_b131k-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.6 |
| [convnext_large_d_320.laion2b_s29b_b131k-ft-soup](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.9 |
RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only, D = Dropout (prob) -- image tower head only
LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering.
Model training done by Ross Wightman on the [stability.ai](https://stability.ai/) cluster.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
In addition to the above notice, the LAION-5B dataset used in training of these models has additional considerations; see below.
# Training Details
## Training Data
This model was trained with one of the following (see the table in the intro):
* LAION-2B - A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
* LAION-Aesthetic - A 900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning holds there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come along with training large-scale models, as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. However, while providing our dataset openly, we do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
All 320x320 model fine-tunes were trained with a global batch size of 131072 for 10-16 checkpoint intervals of 203.7M samples for a total of ~2-3B samples seen over fine-tune.
For 320x320 models, a slurm script w/ srun below was used on 64 8-GPU (A100 40GB) nodes (Stability).
```
/opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \
--save-frequency 1 \
--name "convnext_large_320" \
    --pretrained "/runs/convnext_large_256/epoch_128.pt" \
--resume 'latest' \
    --train-data="pipe:aws s3 cp s3://mybucket/path/laion{00000..xxxxx}.tar -" \
--train-num-samples 203666042 \
--dataset-type webdataset \
--precision amp_bfloat16 \
--beta2 0.98 \
--warmup 2000 \
--batch-size=256 \
--epochs=12 \
--dataset-resampled \
--aug-cfg use_timm=True scale='(0.5, 1.0)' re_prob=0.4 \
--clip-grad-norm 5.0 \
--lr 5e-5 \
--workers=6 \
--model "convnext_large_d_320" \
--seed 0 \
--ddp-static-graph \
--local-loss \
--gather-with-grad \
--grad-checkpointing
```
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval.
## Results
The models achieve between 75.9 and 76.9 top-1 zero-shot accuracy on ImageNet-1k.
Zero-shot accuracy curve of the original from-scratch 256x256 training:

An initial round of benchmarks has been performed on a wider range of datasets; the results are viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) for compute used to train this model.
# Citation
**BibTeX:**
LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenCLIP software
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
OpenAI CLIP paper
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@Article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
smilingface88/xlm-roberta-base-finetuned-panx-fr | smilingface88 | xlm-roberta | 10 | 0 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,320 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2676
- F1: 0.8449
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5915 | 1.0 | 191 | 0.3285 | 0.7814 |
| 0.2651 | 2.0 | 382 | 0.2707 | 0.8314 |
| 0.174 | 3.0 | 573 | 0.2676 | 0.8449 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
nanashisan/Sherlock-Hound | nanashisan | null | 7 | 0 | null | 0 | null | false | false | false | null | ['ja'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 782 |
Mrs. Hudson
- keyword:hudson
- Sample Prompt:best quality,furry female, hudson, 1girl, solo, apron, smile, door, dress, white apron, long sleeves, indoors, pink dress, looking at viewer, dog girl, closed mouth, short hair, tail, bangs, furry, maid
<lora:hudson-epoch08:1>
- Negative Prompt: (worst quality,low quality:1.4),bad anatomy,3d,(animal tail,dog tail:1.4)
- 
- 
Model
- hudson-epoch08.safetensors
  - Recommended: trained to an appropriate degree; the sample images above were generated with this version
- hudson-epoch06.safetensors
  - Undertrained: may render a human face instead of a dog face
- hudson-epoch10.safetensors
  - Overtrained: image quality is somewhat degraded
|
smilingface88/xlm-roberta-base-finetuned-panx-it | smilingface88 | xlm-roberta | 10 | 0 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,320 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2467
- F1: 0.8206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7897 | 1.0 | 70 | 0.3096 | 0.7519 |
| 0.2819 | 2.0 | 140 | 0.2603 | 0.8093 |
| 0.1818 | 3.0 | 210 | 0.2467 | 0.8206 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
antonellaavad/pablo-pictures | antonellaavad | null | 71 | 0 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora'] | false | true | true | 504 |
# LoRA DreamBooth - pablo-pic
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "pablo" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: picture of pablo




|
smilingface88/xlm-roberta-base-finetuned-panx-en | smilingface88 | xlm-roberta | 10 | 0 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,320 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4028
- F1: 0.6869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1396 | 1.0 | 50 | 0.5670 | 0.5101 |
| 0.5289 | 2.0 | 100 | 0.4594 | 0.6358 |
| 0.3838 | 3.0 | 150 | 0.4028 | 0.6869 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
frncscp/Patacotron | frncscp | null | 13 | 0 | keras | 0 | image-classification | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['One-Class Image Classification'] | false | true | true | 3,743 |
# Patacotrón
Artificial Intelligence capable of patacón recognition
# Details
Different models are hosted here in chronological order; they were trained on a positive/negative-class dataset with custom architectures and transfer learning.
## Description
This series of models is part of an investigation into "One-Class Image Classification with AI Algorithms", and the models are able to recognize patacones in images.
- **Developed by:** https://github.com/frncscp
- **Model type:** Convolutional Neural Network, DNN
- **License:** mit
- **Finetuned from model (the ones that were trained with transfer learning):** DenseNet, Xception
## Sources
- **Repository:** https://github.com/frncscp/ptctrn
- **Paper:** Still in progress
- **Demo:** https://huggingface.co/spaces/frncscp/Patacotron
# Uses
The models that were trained via transfer learning now have custom weights, but all models can be fine-tuned for image-classification tasks.
## Direct Use
Patacognition (Patacón Recognition)
# Bias, Risks, and Limitations
Early versions can have a strong color bias, and most of the models may have problems if the texture is too similar, as with other deep-fried foods.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model:
- Resize images to 224x224, with RGB channels.
- A clear, well-defined photo will always be preferable.
- The fewer things appearing in the image, the better.
- A prediction of 80% or more counts as a patacón.
## How to Get Started with Patacotrón
Import the required modules, and with these four lines you're good to go:
```python
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model

# Read the image, convert BGR -> RGB, and resize to the model's input size.
img = cv2.imread(image_dir)
resize = tf.image.resize(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), (IMAGE_WIDTH, IMAGE_HEIGHT))
ptctrn = load_model(model_dir)
y_gorrito = ptctrn.predict(np.expand_dims(resize / 255, 0))
```
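Following the card's 80% threshold, a sketch of the final decision step (assuming a single sigmoid output):
```python
# A prediction of 0.8 or more counts as a patacón.
if float(y_gorrito.squeeze()) >= 0.8:
    print("Patacón detected")
else:
    print("Not a patacón")
```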
## Training Data
A custom handmade dataset was used.
### Preprocessing
Images were resized to 224x224
### Speeds, Sizes, Times
It largely depends on the model, but epoch's speed could range from 15 to 30 minutes, and training from several hours to a day and a half.
# Evaluation
A novel efficiency formula was devised, which combines the average prediction, the score on an image dataset, and a weight for each variable. It is as follows:
- For positive classes:
$$E = \frac{(S * {S}') + (P * {P}')}{{S}' + {P}'}$$
- For negative classes:
$$E = \frac{(S * {S}') + ((1-P) * {P}')}{{S}' + {P}'}$$
Where:
- $S$ represents the normalized score (+1 for each image correctly classified, -1 vice versa), scaled from 0 to 1
- $P$ represents the average prediction
- ${S}'$ and ${P}'$ are the biases of each variable
Each score and average prediction is extracted from a folder of all-positive or all-negative class images.
The results were taken from 4 folders (half positive, half negative) with over 15k files, a threshold of 80%, and a prediction bias of 1.2.
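As a small worked sketch of the formula (the prediction bias P' = 1.2 comes from the text above; the score bias S' = 1 is an assumption, since the card does not state it):
```python
def efficiency(S, P, S_bias=1.0, P_bias=1.2, positive=True):
    """Efficiency formula from this card; S_bias is an assumed default."""
    p_term = P if positive else (1 - P)
    return (S * S_bias + p_term * P_bias) / (S_bias + P_bias)

# Example: a positive-class folder with score 0.9 and average prediction 0.85.
print(efficiency(0.9, 0.85))  # ~0.873
```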
### Testing Data
Custom handmade dataset.
### Metrics
Efficiency, Binary Cross Entropy loss, Accuracy, AUC, Binary Accuracy (with a threshold of 0.8 on later versions)
## Results
For efficiency:
'ptctrn_v1.1.h5': 0.4805495065119731
'ptctrn_v1.2.h5': 0.4238817456329115
'ptctrn_v1.3.h5': 0.5343622414829778
'ptctrn_v1.4.h5': 0.6059606705949329
'ptctrn_v1.5.h5': 0.5040440155920757
'ptctrn_v1.6.h5': 0.6889029338405537
'ptctrn_v1.7.h5': 0.7112169071407513
'ptctrn_v1.8.h5': 0.7181276230324835
'ptctrn_v1.9.h5': 0.6869660637054289
'ptctrn_v1.9.1.h5': 0.6810481039373982
'ptctrn_v1.10.h5': 0.6350583567300264
### Summary
Further training is needed, but some versions (1.5, 1.6, 1.7, 1.8) can be efficient against similar images; model ensembling is recommended.
### Hardware
- Ryzen 5 3500U (laptop CPU)
- 20GB 2400MHz RAM
|
smilingface88/xlm-roberta-base-finetuned-panx-all | smilingface88 | xlm-roberta | 10 | 0 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1712
- F1: 0.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3049 | 1.0 | 835 | 0.1862 | 0.8051 |
| 0.1618 | 2.0 | 1670 | 0.1778 | 0.8385 |
| 0.1063 | 3.0 | 2505 | 0.1712 | 0.8537 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
arun-shankar/GPT2-RLHF-covid | arun-shankar | gpt2 | 12 | 3 | transformers | 2 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 760 |
## GPT2 fine-tuned with COVID-19 question-answer pairs using Reinforcement Learning with Human Feedback (RLHF) and Proximal Policy Optimization (PPO)
Uses PPO and the TRL library to align responses with the expected response, as scored by BERTScore.
You can ask the model any question related to COVID-19 in this format:
**question: should i wear a mask at home?\nanswer:**
You can also add a CTRL token as a special prefix in front of your prompt to align the response with your preference.
For example, you add either a [good] or a [bad] token as a prefix:
**[good]question: should i wear a mask at school?\nanswer:**
Good and bad here refer to the quality of the response: one more aligned with the expected (ground-truth) response is good, and one less aligned is bad.
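As a minimal generation sketch (not part of the original card), using the prompt format described above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("arun-shankar/GPT2-RLHF-covid")
model = AutoModelForCausalLM.from_pretrained("arun-shankar/GPT2-RLHF-covid")

# Prefix the prompt with [good] to steer toward a higher-quality answer.
prompt = "[good]question: should i wear a mask at school?\nanswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```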
|
corbt/roberta-lora-2 | corbt | roberta | 8 | 4 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 8,586 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-lora-2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5790
- Mse: 0.5790
- Mae: 0.5751
- R2: 0.5572
- Accuracy: 0.5465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.9268 | 0.02 | 2500 | 0.7467 | 0.7467 | 0.6737 | 0.4290 | 0.4621 |
| 0.7651 | 0.05 | 5000 | 0.7631 | 0.7631 | 0.6773 | 0.4164 | 0.4582 |
| 0.7399 | 0.07 | 7500 | 0.9654 | 0.9654 | 0.7675 | 0.2616 | 0.4104 |
| 0.7249 | 0.1 | 10000 | 0.7259 | 0.7259 | 0.6579 | 0.4449 | 0.4763 |
| 0.7122 | 0.12 | 12500 | 0.7292 | 0.7292 | 0.6596 | 0.4423 | 0.4753 |
| 0.7035 | 0.15 | 15000 | 0.7039 | 0.7039 | 0.6425 | 0.4616 | 0.4889 |
| 0.6992 | 0.17 | 17500 | 0.8192 | 0.8192 | 0.7018 | 0.3735 | 0.4485 |
| 0.6885 | 0.2 | 20000 | 0.8312 | 0.8312 | 0.7040 | 0.3643 | 0.4480 |
| 0.6974 | 0.22 | 22500 | 0.6822 | 0.6822 | 0.6317 | 0.4782 | 0.4987 |
| 0.6933 | 0.25 | 25000 | 0.7079 | 0.7079 | 0.6426 | 0.4586 | 0.4936 |
| 0.6972 | 0.27 | 27500 | 0.7470 | 0.7470 | 0.6638 | 0.4287 | 0.4768 |
| 0.6838 | 0.29 | 30000 | 0.6918 | 0.6918 | 0.6362 | 0.4709 | 0.5009 |
| 0.6766 | 0.32 | 32500 | 0.6597 | 0.6597 | 0.6199 | 0.4955 | 0.5035 |
| 0.6746 | 0.34 | 35000 | 0.7049 | 0.7049 | 0.6431 | 0.4609 | 0.4897 |
| 0.6742 | 0.37 | 37500 | 0.6701 | 0.6701 | 0.6240 | 0.4875 | 0.5096 |
| 0.6772 | 0.39 | 40000 | 0.6616 | 0.6616 | 0.6176 | 0.4940 | 0.5120 |
| 0.6717 | 0.42 | 42500 | 0.6548 | 0.6548 | 0.6187 | 0.4992 | 0.5072 |
| 0.6849 | 0.44 | 45000 | 0.6486 | 0.6486 | 0.6157 | 0.5039 | 0.5087 |
| 0.6727 | 0.47 | 47500 | 0.6829 | 0.6829 | 0.6294 | 0.4777 | 0.5030 |
| 0.7081 | 0.49 | 50000 | 0.6777 | 0.6777 | 0.6299 | 0.4817 | 0.5037 |
| 0.6692 | 0.52 | 52500 | 0.6634 | 0.6634 | 0.6206 | 0.4927 | 0.5078 |
| 0.6676 | 0.54 | 55000 | 0.6760 | 0.6760 | 0.6261 | 0.4830 | 0.5068 |
| 0.6575 | 0.56 | 57500 | 0.6301 | 0.6301 | 0.6060 | 0.5181 | 0.5172 |
| 0.6661 | 0.59 | 60000 | 0.6626 | 0.6626 | 0.6168 | 0.4933 | 0.5153 |
| 0.653 | 0.61 | 62500 | 0.6516 | 0.6516 | 0.6176 | 0.5017 | 0.5106 |
| 0.6583 | 0.64 | 65000 | 0.7014 | 0.7014 | 0.6400 | 0.4636 | 0.4951 |
| 0.6617 | 0.66 | 67500 | 0.6620 | 0.6620 | 0.6207 | 0.4937 | 0.5090 |
| 0.6475 | 0.69 | 70000 | 0.6286 | 0.6286 | 0.6037 | 0.5193 | 0.5223 |
| 0.6455 | 0.71 | 72500 | 0.7304 | 0.7304 | 0.6545 | 0.4414 | 0.4863 |
| 0.6464 | 0.74 | 75000 | 0.6246 | 0.6246 | 0.6006 | 0.5223 | 0.5199 |
| 0.646 | 0.76 | 77500 | 0.6414 | 0.6414 | 0.6124 | 0.5095 | 0.5126 |
| 0.6502 | 0.79 | 80000 | 0.6131 | 0.6131 | 0.5988 | 0.5311 | 0.5245 |
| 0.6443 | 0.81 | 82500 | 0.6376 | 0.6376 | 0.6064 | 0.5123 | 0.5229 |
| 0.641 | 0.83 | 85000 | 0.6399 | 0.6399 | 0.6096 | 0.5106 | 0.5163 |
| 0.6495 | 0.86 | 87500 | 0.6709 | 0.6709 | 0.6239 | 0.4869 | 0.5093 |
| 0.642 | 0.88 | 90000 | 0.6025 | 0.6025 | 0.5952 | 0.5392 | 0.5212 |
| 0.636 | 0.91 | 92500 | 0.6870 | 0.6870 | 0.6317 | 0.4746 | 0.5006 |
| 0.633 | 0.93 | 95000 | 0.6190 | 0.6190 | 0.5949 | 0.5266 | 0.5270 |
| 0.6316 | 0.96 | 97500 | 0.6053 | 0.6053 | 0.5926 | 0.5371 | 0.5280 |
| 0.6224 | 0.98 | 100000 | 0.6098 | 0.6098 | 0.5956 | 0.5336 | 0.5217 |
| 0.6304 | 1.01 | 102500 | 0.6124 | 0.6124 | 0.5949 | 0.5317 | 0.5280 |
| 0.6238 | 1.03 | 105000 | 0.6138 | 0.6138 | 0.5950 | 0.5306 | 0.5313 |
| 0.6228 | 1.06 | 107500 | 0.6302 | 0.6302 | 0.6038 | 0.5180 | 0.5189 |
| 0.6218 | 1.08 | 110000 | 0.6198 | 0.6198 | 0.5958 | 0.5260 | 0.5274 |
| 0.6164 | 1.1 | 112500 | 0.6045 | 0.6045 | 0.5895 | 0.5377 | 0.5327 |
| 0.6295 | 1.13 | 115000 | 0.6040 | 0.6040 | 0.5884 | 0.5381 | 0.5352 |
| 0.614 | 1.15 | 117500 | 0.5956 | 0.5956 | 0.5863 | 0.5445 | 0.5346 |
| 0.6016 | 1.18 | 120000 | 0.6208 | 0.6208 | 0.5994 | 0.5252 | 0.5246 |
| 0.6103 | 1.2 | 122500 | 0.6060 | 0.6060 | 0.5888 | 0.5366 | 0.5343 |
| 0.614 | 1.23 | 125000 | 0.6198 | 0.6198 | 0.5995 | 0.5259 | 0.5293 |
| 0.6113 | 1.25 | 127500 | 0.6010 | 0.6010 | 0.5874 | 0.5403 | 0.5340 |
| 0.6131 | 1.28 | 130000 | 0.6118 | 0.6118 | 0.5926 | 0.5321 | 0.5303 |
| 0.6069 | 1.3 | 132500 | 0.5914 | 0.5914 | 0.5815 | 0.5477 | 0.5406 |
| 0.6016 | 1.33 | 135000 | 0.5908 | 0.5908 | 0.5825 | 0.5482 | 0.5417 |
| 0.6053 | 1.35 | 137500 | 0.6166 | 0.6166 | 0.5939 | 0.5285 | 0.5317 |
| 0.5927 | 1.37 | 140000 | 0.5910 | 0.5910 | 0.5840 | 0.5480 | 0.5392 |
| 0.5942 | 1.4 | 142500 | 0.5965 | 0.5965 | 0.5856 | 0.5438 | 0.5387 |
| 0.5966 | 1.42 | 145000 | 0.6121 | 0.6121 | 0.5923 | 0.5319 | 0.5358 |
| 0.5941 | 1.45 | 147500 | 0.5889 | 0.5889 | 0.5814 | 0.5496 | 0.5373 |
| 0.6007 | 1.47 | 150000 | 0.5833 | 0.5833 | 0.5770 | 0.5539 | 0.5436 |
| 0.6024 | 1.5 | 152500 | 0.5862 | 0.5862 | 0.5786 | 0.5517 | 0.5423 |
| 0.5896 | 1.52 | 155000 | 0.5913 | 0.5913 | 0.5813 | 0.5478 | 0.5429 |
| 0.5906 | 1.55 | 157500 | 0.5944 | 0.5944 | 0.5854 | 0.5454 | 0.5373 |
| 0.5847 | 1.57 | 160000 | 0.5989 | 0.5989 | 0.5845 | 0.5419 | 0.5398 |
| 0.5837 | 1.6 | 162500 | 0.5914 | 0.5914 | 0.5822 | 0.5477 | 0.5394 |
| 0.5928 | 1.62 | 165000 | 0.5888 | 0.5888 | 0.5798 | 0.5497 | 0.5424 |
| 0.585 | 1.64 | 167500 | 0.5952 | 0.5952 | 0.5829 | 0.5448 | 0.5391 |
| 0.5929 | 1.67 | 170000 | 0.5829 | 0.5829 | 0.5768 | 0.5542 | 0.5440 |
| 0.5886 | 1.69 | 172500 | 0.5831 | 0.5831 | 0.5783 | 0.5540 | 0.5428 |
| 0.5793 | 1.72 | 175000 | 0.5857 | 0.5857 | 0.5776 | 0.5520 | 0.5453 |
| 0.5805 | 1.74 | 177500 | 0.5746 | 0.5746 | 0.5727 | 0.5606 | 0.5489 |
| 0.5875 | 1.77 | 180000 | 0.5798 | 0.5798 | 0.5739 | 0.5566 | 0.5487 |
| 0.5898 | 1.79 | 182500 | 0.5818 | 0.5818 | 0.5746 | 0.5550 | 0.5475 |
| 0.5884 | 1.82 | 185000 | 0.5736 | 0.5736 | 0.5722 | 0.5613 | 0.5496 |
| 0.5757 | 1.84 | 187500 | 0.5816 | 0.5816 | 0.5756 | 0.5552 | 0.5464 |
| 0.5789 | 1.87 | 190000 | 0.5846 | 0.5846 | 0.5774 | 0.5529 | 0.5448 |
| 0.575 | 1.89 | 192500 | 0.5866 | 0.5866 | 0.5779 | 0.5513 | 0.5443 |
| 0.5836 | 1.91 | 195000 | 0.5815 | 0.5815 | 0.5764 | 0.5552 | 0.5470 |
| 0.573 | 1.94 | 197500 | 0.5805 | 0.5805 | 0.5749 | 0.5561 | 0.5493 |
| 0.5728 | 1.96 | 200000 | 0.5808 | 0.5808 | 0.5757 | 0.5558 | 0.5474 |
| 0.5711 | 1.99 | 202500 | 0.5790 | 0.5790 | 0.5751 | 0.5572 | 0.5465 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
lora-library/walter-white-dreambooth
|
lora-library
| null | 71 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
| false | true | true | 504 |
# LoRA DreamBooth - walter-white
These are LoRA adaption weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "break bad" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
Test prompt: break bad




|
dougtrajano/toxic-comment-classification
|
dougtrajano
|
bert
| 10 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['pt']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['toxicity', 'portuguese', 'hate speech', 'offensive language', 'generated_from_trainer']
| true | true | true | 2,254 |
# dougtrajano/toxic-comment-classification
Toxic Comment Classification is a model that detects if the text is toxic or not.
This BERT model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the [OLID-BR dataset](https://huggingface.co/datasets/dougtrajano/olid-br).
## Overview
**Input:** Text in Brazilian Portuguese
**Output:** Binary classification (toxic or not toxic)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("dougtrajano/toxic-comment-classification")
model = AutoModelForSequenceClassification.from_pretrained("dougtrajano/toxic-comment-classification")
```
## Limitations and bias
The following factors may degrade the model’s performance.
**Text Language**: The model was trained on Brazilian Portuguese texts, so it may not work well with Portuguese dialects.
**Text Origin**: The model was trained on texts from social media and a few texts from other sources, so it may not work well on other types of texts.
## Trade-offs
Sometimes models exhibit performance issues under particular circumstances. In this section, we'll discuss situations in which you might discover that the model performs less than optimally, and should plan accordingly.
**Text Length**: The model was fine-tuned on texts with a word count between 1 and 178 words (average of 18 words). It may give poor results on texts with a word count outside this range.
## Performance
The model was evaluated on the test set of the [OLID-BR](https://dougtrajano.github.io/olid-br/) dataset.
**Accuracy:** 0.8578
**Precision:** 0.8594
**Recall:** 0.8578
**F1-Score:** 0.8580
| Class | Precision | Recall | F1-Score | Support |
| :---: | :-------: | :----: | :------: | :-----: |
| `NOT-OFFENSIVE` | 0.8886 | 0.8490 | 0.8683 | 1,775 |
| `OFFENSIVE` | 0.8233 | 0.8686 | 0.8453 | 1,438 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.255788747459486e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1993
- optimizer: Adam with betas=(0.8445637934160373,0.8338816842140165) and epsilon=2.527092625455385e-08
- lr_scheduler_type: linear
- num_epochs: 30
- label_smoothing_factor: 0.07158711257743958
### Framework versions
- Transformers 4.26.0
- Pytorch 1.10.2+cu113
- Datasets 2.9.0
- Tokenizers 0.13.2
## Provide Feedback
If you have any feedback on this model, please [open an issue](https://github.com/DougTrajano/ToChiquinho/issues/new) on GitHub.
|
mqy/mt5-small-finetuned-11feb-1
|
mqy
|
mt5
| 14 | 5 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 1,593 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-11feb-1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5110
- Rouge1: 17.54
- Rouge2: 5.46
- Rougel: 17.42
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 5.4156 | 1.0 | 311 | 2.6628 | 15.35 | 5.18 | 15.25 |
| 3.366 | 2.0 | 622 | 2.5576 | 16.92 | 5.18 | 16.8 |
| 3.1718 | 3.0 | 933 | 2.5174 | 16.96 | 5.48 | 16.83 |
| 3.0648 | 4.0 | 1244 | 2.5021 | 17.32 | 5.34 | 17.12 |
| 3.0095 | 5.0 | 1555 | 2.5110 | 17.54 | 5.46 | 17.42 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
AngelUrq/ppo-Huggy
|
AngelUrq
| null | 32 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 819 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: AngelUrq/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ziyu600601/etreyrt
|
ziyu600601
| null | 16 | 8 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image', 'image-to-image', 'diffusers']
| false | true | true | 4,567 |
# Diffusion model
This model is trained with high quality and detailed anime images.
## Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI run EimisAnimeDiffusion_1.0v:
[](https://huggingface.co/spaces/akhaliq/EimisAnimeDiffusion_1.0v)
# Sample generations
This model works well on anime and landscape generations.<br>
Anime:<br>
There are some sample generations:<br>
```
Positive:a girl, Phoenix girl, fluffy hair, war, a hell on earth, Beautiful and detailed explosion, Cold machine, Fire in eyes, burning, Metal texture, Exquisite cloth, Metal carving, volume, best quality, normal hands, Metal details, Metal scratch, Metal defects, masterpiece, best quality, best quality, illustration, highres, masterpiece, contour deepening, illustration,(beautiful detailed girl),beautiful detailed glow
Negative:lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls))
Steps: 20, Sampler: DPM++ 2S a, CFG scale: 8, Seed: 4186044705/4186044707, Size: 704x896
```
<img src=https://imgur.com/2U295w3.png width=75% height=75%>
<img src=https://imgur.com/2jtF376.png width=75% height=75%>
```
Positive:(1girl), cute, walking in the park, (night), full moon, north star, blue shirt, red skirt, detailed shirt, jewelry, autumn, dark blue hair, shirt hair, (magic:1.5), beautiful blue eyes
Negative: lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls))
Steps: 35, Sampler: Euler a, CFG scale: 9, Seed: 296195494, Size: 768x960
```
<img src=https://imgur.com/gudKxQe.png width=75% height=75%>
```
Positive:night , ((1 girl)), alone, masterpiece, 8k wallpaper, highres, absurdres, high quality background, short hair, black hair, multicolor hair, beautiful frozen village, (full bright moon), blue dress, detailed dress, jewelry dress, (magic:1.2), blue fire, blue eyes, glowing eyes, fire, ice goddess, (blue detailed beautiful crown), electricity, blue electricity, blue light particles
Negative: lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls))
Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 9, Seed: 2118767319, Size: 768x832
```
<img src=https://imgur.com/lJL4CJL.png width=75% height=75%>
Want to generate some amazing backgrounds? No problem:
```
Positive: above clouds, mountains, (night), full moon, castle, huge forest, forest between mountains, beautiful, masterpiece
Negative: lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls))
Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 9, Seed: 83644543, Size: 896x640
```
<img src=https://imgur.com/XfxAx0S.png width=75% height=75%>
## Disclaimer
Some prompts might not work perfectly (mainly colors), so add some more prompts for it to work, or try these -->().
Usually they help. Also works well with img2img if you want to add detail.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
Seungjun/t5-small-finetuned-t5-Thor4
|
Seungjun
|
t5
| 12 | 7 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,592 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-t5-Thor4
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5607
- Rouge1: 30.1917
- Rouge2: 17.6334
- Rougel: 26.8513
- Rougelsum: 28.7606
- Gen Len: 18.9881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9251 | 1.0 | 675 | 1.6082 | 29.3372 | 16.9607 | 26.1096 | 27.9357 | 18.9874 |
| 1.763 | 2.0 | 1350 | 1.5696 | 30.1869 | 17.5627 | 26.8425 | 28.7413 | 18.9881 |
| 1.7139 | 3.0 | 2025 | 1.5607 | 30.1917 | 17.6334 | 26.8513 | 28.7606 | 18.9881 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
figfig/local_test_model_with_local_dataset
|
figfig
|
whisper
| 14 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,469 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# local_test_model_with_local_dataset
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5566
- Wer: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 10.0 | 10 | 3.4660 | 85.7143 |
| No log | 20.0 | 20 | 0.7373 | 10.7143 |
| 3.3998 | 30.0 | 30 | 0.5920 | 0.0 |
| 3.3998 | 40.0 | 40 | 0.5566 | 0.0 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ghproducts/ppo-LunarLander-v2
|
ghproducts
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PP0** Agent playing **LunarLander-v2**
This is a trained model of a **PP0** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
figfig/restaurant_test_at_local
|
figfig
|
whisper
| 20 | 1 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,458 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# restaurant_test_at_local
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5402
- Wer: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 10.0 | 10 | 3.7743 | 100.0 |
| No log | 20.0 | 20 | 0.6750 | 52.6316 |
| 3.6042 | 30.0 | 30 | 0.5724 | 0.0 |
| 3.6042 | 40.0 | 40 | 0.5402 | 0.0 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.11.0+cu115
- Datasets 2.9.0
- Tokenizers 0.13.2
|
surprisedPikachu007/crop_prediction
|
surprisedPikachu007
| null | 3 | 0 | null | 0 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 4,907 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
paulkm/autotrain-lottery_prod_v3-3409393337
|
paulkm
|
bert
| 8 | 7 |
transformers
| 0 |
text-classification
| true | false | false | null |
['zh']
|
['paulkm/autotrain-data-lottery_prod_v3']
|
{'emissions': 3.67386840637788}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['autotrain', 'text-classification']
| false | true | true | 946 |
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 3409393337
- CO2 Emissions (in grams): 3.6739
## Validation Metrics
- Loss: 0.244
- Accuracy: 0.909
- Precision: 0.922
- Recall: 0.875
- AUC: 0.953
- F1: 0.898
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/paulkm/autotrain-lottery_prod_v3-3409393337
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("paulkm/autotrain-lottery_prod_v3-3409393337", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("paulkm/autotrain-lottery_prod_v3-3409393337", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
yizhangliu/poca-SoccerTwos-v9
|
yizhangliu
| null | 24 | 63 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 847 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: yizhangliu/poca-SoccerTwos-v9
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
shields/whisper-medium-catalan
|
shields
|
whisper
| 18 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hi']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['hf-asr-leaderboard', 'generated_from_trainer']
| true | true | true | 1,293 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Catalan
This model is a fine-tuned version of [openai/whisper-Medium](https://huggingface.co/openai/whisper-Medium) on the 10 hrs of Catalan Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 4.9217
- eval_wer: 132.1947
- eval_runtime: 3848.0596
- eval_samples_per_second: 0.78
- eval_steps_per_second: 0.78
- epoch: 1.14
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
kuhs/desci2
|
kuhs
| null | 4 | 0 |
sklearn
| 0 |
tabular-classification
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sklearn', 'skops', 'tabular-classification']
| false | true | true | 6,147 |
# Model description
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
### Hyperparameters
The model is trained with below hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------|---------|
| ccp_alpha | 0.0 |
| class_weight | |
| criterion | gini |
| max_depth | |
| max_features | |
| max_leaf_nodes | |
| min_impurity_decrease | 0.0 |
| min_samples_leaf | 1 |
| min_samples_split | 2 |
| min_weight_fraction_leaf | 0.0 |
| random_state | |
| splitter | best |
</details>
### Model Plot
The model plot is below.
<style>#sk-container-id-2 {color: black;background-color: white;}#sk-container-id-2 pre{padding: 0;}#sk-container-id-2 div.sk-toggleable {background-color: white;}#sk-container-id-2 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-2 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-2 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-2 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-2 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-2 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-2 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-2 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-2 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-2 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-2 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-2 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-2 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-2 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-2 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-2 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-2 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-2 div.sk-item {position: relative;z-index: 1;}#sk-container-id-2 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-2 div.sk-item::before, #sk-container-id-2 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-2 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-2 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-2 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-2 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-2 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-2 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-2 div.sk-label-container {text-align: center;}#sk-container-id-2 
div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-2 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-2" class="sk-top-container" style="overflow: auto;"><div class="sk-text-repr-fallback"><pre>DecisionTreeClassifier()</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-2" type="checkbox" checked><label for="sk-estimator-id-2" class="sk-toggleable__label sk-toggleable__label-arrow">DecisionTreeClassifier</label><div class="sk-toggleable__content"><pre>DecisionTreeClassifier()</pre></div></div></div></div></div>
## Evaluation Results
[More Information Needed]
# How to Get Started with the Model
[More Information Needed]
# Model Card Authors
This model card is written by following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
# model_description
some random description
|
c-q/q-FrozenLake-v1-4x4-noSlippery
|
c-q
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 392 |
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="c-q/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Duskfallcrew/isometric-dreams
|
Duskfallcrew
| null | 21 | 21 |
diffusers
| 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 1,194 |
[](https://huggingface.co/spaces/Duskfallcrew/isometric-dreams)
### Isometric Dreams Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-1-512 base model
You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
All samples and info are here:
https://civitai.com/user/duskfallcrew
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
duskametrik (use that on your prompt)
|
FredZhang7/danbooru-tag-generator
|
FredZhang7
|
gpt2
| 5 | 0 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
|
['en']
|
['FredZhang7/anime-prompts-180K']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'anime', 'art']
| false | true | true | 595 |
## Disclaimer
Danbooru stores millions of tagged anime images, but it doesn't have a way to filter out NSFW content. This model was trained on 100,000 of these tags with up_score ≥ 3 for 3 epochs, so it's possible that some tags might contain NSFW descriptions.
So, just be mindful of that. Thank you for your cooperation.
## The Safe Version
For details on data preprocessing, prompt engineering, and more, please see [Fast Anime PromptGen](https://huggingface.co/FredZhang7/anime-anything-promptgen-v2).
I used a very similar approach to train the Danbooru version.
|
kaliputra/q-FrozenLake-v1-4x4-noSlippery
|
kaliputra
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 398 |
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="kaliputra/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
c-q/Taxi-v3
|
c-q
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 369 |
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="c-q/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kaliputra/q-Taxi-v3-v1
|
kaliputra
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 368 |
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="kaliputra/q-Taxi-v3-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
pittawat/Reinforce-pixelcopter
|
pittawat
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Apes07/my_awesome_qa_model
|
Apes07
|
distilbert
| 36 | 2 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 908 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
kuhs/my-awesome-model
|
kuhs
| null | 5 | 0 |
sklearn
| 0 |
tabular-classification
| false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sklearn', 'skops', 'tabular-classification']
| false | true | true | 6,777 |
# Model description
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
### Hyperparameters
The model is trained with below hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------|---------|
| ccp_alpha | 0.0 |
| class_weight | |
| criterion | gini |
| max_depth | |
| max_features | |
| max_leaf_nodes | |
| min_impurity_decrease | 0.0 |
| min_samples_leaf | 1 |
| min_samples_split | 2 |
| min_weight_fraction_leaf | 0.0 |
| random_state | |
| splitter | best |
</details>
### Model Plot
The model plot is below.
<style>#sk-container-id-1 {color: black;background-color: white;}#sk-container-id-1 pre{padding: 0;}#sk-container-id-1 div.sk-toggleable {background-color: white;}#sk-container-id-1 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-1 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-1 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-1 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-1 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-1 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-1 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-1 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-1 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-1 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-1 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-1 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-1 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-1 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-1 div.sk-item {position: relative;z-index: 1;}#sk-container-id-1 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-1 div.sk-item::before, #sk-container-id-1 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-1 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-1 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-1 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-1 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-1 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-1 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-1 div.sk-label-container {text-align: center;}#sk-container-id-1 
div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-1 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-1" class="sk-top-container" style="overflow: auto;"><div class="sk-text-repr-fallback"><pre>DecisionTreeClassifier()</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-1" type="checkbox" checked><label for="sk-estimator-id-1" class="sk-toggleable__label sk-toggleable__label-arrow">DecisionTreeClassifier</label><div class="sk-toggleable__content"><pre>DecisionTreeClassifier()</pre></div></div></div></div></div>
## Evaluation Results
You can find the details about evaluation process and the evaluation results.
| Metric | Value |
|----------|----------|
| accuracy | 0.912281 |
| f1 score | 0.912281 |
# How to Get Started with the Model
[More Information Needed]
# Model Card Authors
This model card is written by following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
# citation_bibtex
bibtex
@inproceedings{...,year={2020}}
# get_started_code
import pickle
with open(dtc_pkl_filename, 'rb') as file:
clf = pickle.load(file)
# model_card_authors
skops_user
# limitations
This model is not ready to be used in production.
# model_description
This is a DecisionTreeClassifier model trained on breast cancer dataset.
# eval_method
The model is evaluated using test split, on accuracy and F1 score with macro average.
# confusion_matrix

|
pittawat/ppo-SnowballTarget
|
pittawat
| null | 30 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 855 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: pittawat/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kaliputra/q-Taxi-v3-v2
|
kaliputra
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 368 |
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="kaliputra/q-Taxi-v3-v2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Sjdan/finetuning11
|
Sjdan
|
wav2vec2
| 12 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,122 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning11
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.0 | 0.31 | 500 | nan | 1.0 |
| 0.0 | 0.61 | 1000 | nan | 1.0 |
| 0.0 | 0.92 | 1500 | nan | 1.0 |
| 0.0 | 1.23 | 2000 | nan | 1.0 |
| 0.0 | 1.54 | 2500 | nan | 1.0 |
| 0.0 | 1.84 | 3000 | nan | 1.0 |
| 0.0 | 2.15 | 3500 | nan | 1.0 |
| 0.0 | 2.46 | 4000 | nan | 1.0 |
| 0.0 | 2.77 | 4500 | nan | 1.0 |
| 0.0 | 3.07 | 5000 | nan | 1.0 |
| 0.0 | 3.38 | 5500 | nan | 1.0 |
| 0.0 | 3.69 | 6000 | nan | 1.0 |
| 0.0 | 4.0 | 6500 | nan | 1.0 |
| 0.0 | 4.3 | 7000 | nan | 1.0 |
| 0.0 | 4.61 | 7500 | nan | 1.0 |
| 0.0 | 4.92 | 8000 | nan | 1.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
erniechiew/a2c-PandaReachDense-v2
|
erniechiew
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
enankobh1/whisper-small-ASR
|
enankobh1
|
whisper
| 26 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,467 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ASR
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7176
- Wer: 112.3086
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0201 | 6.8 | 1000 | 0.4782 | 24.0338 |
| 0.0006 | 13.61 | 2000 | 0.6535 | 76.6110 |
| 0.0002 | 20.41 | 3000 | 0.7004 | 102.1109 |
| 0.0002 | 27.21 | 4000 | 0.7176 | 112.3086 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pittawat/ppo-PyramidsTraining1
|
pittawat
| null | 16 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
| false | true | true | 840 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: pittawat/ppo-PyramidsTraining1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
chist/poca-SoccerTwos
|
chist
| null | 22 | 56 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 839 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: chist/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Sabbir29/distilbert-base-uncased-finetuned-squad
|
Sabbir29
|
distilbert
| 22 | 2 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad_bn']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,174 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_bn dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4896
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2709 | 1.0 | 8703 | 1.4896 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mrizalf7/FirstTextClassification
|
mrizalf7
|
distilbert
| 12 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,274 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FirstTextClassification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2312
- Accuracy: 0.9313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.231 | 1.0 | 1563 | 0.1894 | 0.9268 |
| 0.1514 | 2.0 | 3126 | 0.2312 | 0.9313 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Rubywong123/q-FrozenLake-v1-4x4-noSlippery
|
Rubywong123
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 400 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # `load_from_hub` is the course-notebook helper, sketched below

model = load_from_hub(repo_id="Rubywong123/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
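`load_from_hub` is not a library import; in the Deep RL course it is a small download-and-unpickle helper, roughly like this (a sketch, assuming the model is stored as a pickled dict with `env_id` and the Q-table inside):
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled model dict (Q-table, env_id, hyperparameters) from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```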
|
Rubywong123/q-Taxi-v3
|
Rubywong123
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 367 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # `load_from_hub` is the download-and-unpickle helper from the Deep RL course notebook

model = load_from_hub(repo_id="Rubywong123/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kubasvehla/distilbert-base-uncased-finetuned-emotion
|
kubasvehla
|
distilbert
| 14 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,344 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2288
- Accuracy: 0.9225
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8577 | 1.0 | 250 | 0.3264 | 0.903 | 0.8992 |
| 0.2559 | 2.0 | 500 | 0.2288 | 0.9225 | 0.9226 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Sjdan/finetuning12
|
Sjdan
|
wav2vec2
| 23 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,562 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning12
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.0 | 0.31 | 500 | nan | 1.0 |
| 0.0 | 0.61 | 1000 | nan | 1.0 |
| 0.0 | 0.92 | 1500 | nan | 1.0 |
| 0.0 | 1.23 | 2000 | nan | 1.0 |
| 0.0 | 1.54 | 2500 | nan | 1.0 |
| 0.0 | 1.84 | 3000 | nan | 1.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
LucianoDeben/Reinforce-cartpole-v3
|
LucianoDeben
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
lilDenden/ppo-LunarLander-v2
|
lilDenden
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; use the `.zip` file actually present in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="lilDenden/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
pittawat/a2c-AntBulletEnv-v0
|
pittawat
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; use the `.zip` file actually present in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="pittawat/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
taron88/CCCmix
|
taron88
| null | 3 | 0 | null | 0 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 341 |
A model made by simply merging publicly available models.
Cinnamonmix and Counterfeit-V2.5 were merged into 7th v3.0 C as the base.
7th v3.0 C was placed in slot A, with Cinnamon and Counterfeit in slots B and C.
The setting was, as far as I remember, Weighted sum at 0.5.
The aim was to keep 7thC's anime-leaning illustration style while adding Cinnamon's coloring and atmosphere and Counterfeit's background fidelity.
https://s3.amazonaws.com/moonup/production/uploads/1676112658952-6315eee0e06cb6c5c424344d.jpeg
---
license: other
---
|
hanselgm/autotrain-nlp-exercise-3413793400
|
hanselgm
|
bert
| 8 | 30 |
transformers
| 0 |
text-classification
| true | false | false | null |
['en']
|
['hanselgm/autotrain-data-nlp-exercise']
|
{'emissions': 7.21478572426289}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['autotrain', 'text-classification']
| false | true | true | 1,093 |
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 3413793400
- CO2 Emissions (in grams): 7.2148
## Validation Metrics
- Loss: 0.311
- Accuracy: 0.896
- Macro F1: 0.861
- Micro F1: 0.896
- Weighted F1: 0.892
- Macro Precision: 0.912
- Micro Precision: 0.896
- Weighted Precision: 0.898
- Macro Recall: 0.828
- Micro Recall: 0.896
- Weighted Recall: 0.896
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/hanselgm/autotrain-nlp-exercise-3413793400
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("hanselgm/autotrain-nlp-exercise-3413793400", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("hanselgm/autotrain-nlp-exercise-3413793400", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Sa1i/gakki-mix-512-young
|
Sa1i
| null | 22 | 2 |
diffusers
| 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion', 'gakki']
| false | true | true | 529 |
# VAE
Using this model together with a VAE is highly recommended.
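A minimal loading sketch, assuming the repo is in `diffusers` format; the card does not say which VAE to use, so `stabilityai/sd-vae-ft-mse` below is an assumption, and the prompt is illustrative only:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# The card recommends a VAE but does not name one; sd-vae-ft-mse is a common choice (assumption)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "Sa1i/gakki-mix-512-young", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("gakki, portrait, best quality").images[0]  # illustrative prompt
image.save("sample.png")
```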
# legal & risk
⚠️ It is prohibited to use this model for commercial purposes or in any scenario involving illegal acts or purposes.
Sample pictures of this concept:



|
deprem-ml/deprem-keras-satellite-semantic-mapping
|
deprem-ml
| null | 3 | 0 |
keras
| 0 |
image-segmentation
| false | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Segments', 'mapping', 'keras', 'object-segmentation']
| false | true | true | 272 |
Kaggle Notebook: https://www.kaggle.com/code/kmader/segmenting-buildings-in-satellite-images
Dataset: https://www.kaggle.com/datasets/kmader/synthetic-word-ocr
Hugging Face Space: https://huggingface.co/spaces/deprem-ml/deprem_keras-satellite_semantic_mapping-challange
|
jojoUla/bert-large-cased-sigir-support-refute-no-label-40
|
jojoUla
|
bert
| 14 | 0 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,247 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-sigir-support-refute-no-label-40
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4511 | 1.0 | 252 | 2.0790 |
| 2.0373 | 2.0 | 504 | 1.8538 |
| 1.8052 | 3.0 | 756 | 1.6633 |
| 1.6663 | 4.0 | 1008 | 1.5591 |
| 1.5556 | 5.0 | 1260 | 1.4441 |
| 1.4505 | 6.0 | 1512 | 1.3836 |
| 1.3619 | 7.0 | 1764 | 1.3255 |
| 1.2968 | 8.0 | 2016 | 1.2505 |
| 1.2332 | 9.0 | 2268 | 1.2165 |
| 1.1788 | 10.0 | 2520 | 1.1517 |
| 1.1408 | 11.0 | 2772 | 1.1446 |
| 1.0992 | 12.0 | 3024 | 1.1512 |
| 1.0578 | 13.0 | 3276 | 1.1058 |
| 1.0277 | 14.0 | 3528 | 1.0662 |
| 1.0036 | 15.0 | 3780 | 1.0270 |
| 0.9655 | 16.0 | 4032 | 1.0207 |
| 0.9364 | 17.0 | 4284 | 1.0220 |
| 0.9085 | 18.0 | 4536 | 0.9874 |
| 0.8897 | 19.0 | 4788 | 0.9658 |
| 0.8661 | 20.0 | 5040 | 0.9603 |
| 0.8434 | 21.0 | 5292 | 0.9754 |
| 0.8248 | 22.0 | 5544 | 0.9406 |
| 0.8052 | 23.0 | 5796 | 0.9154 |
| 0.7975 | 24.0 | 6048 | 0.8760 |
| 0.7854 | 25.0 | 6300 | 0.8688 |
| 0.7673 | 26.0 | 6552 | 0.8536 |
| 0.7463 | 27.0 | 6804 | 0.8544 |
| 0.7412 | 28.0 | 7056 | 0.8514 |
| 0.7319 | 29.0 | 7308 | 0.8356 |
| 0.7143 | 30.0 | 7560 | 0.8832 |
| 0.7081 | 31.0 | 7812 | 0.8421 |
| 0.7026 | 32.0 | 8064 | 0.8295 |
| 0.687 | 33.0 | 8316 | 0.8401 |
| 0.6882 | 34.0 | 8568 | 0.8053 |
| 0.679 | 35.0 | 8820 | 0.8438 |
| 0.6672 | 36.0 | 9072 | 0.8450 |
| 0.6669 | 37.0 | 9324 | 0.8231 |
| 0.6665 | 38.0 | 9576 | 0.8410 |
| 0.6596 | 39.0 | 9828 | 0.7909 |
| 0.6556 | 40.0 | 10080 | 0.8019 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
bluepen5805/blue_pencil
|
bluepen5805
| null | 9 | 0 | null | 11 |
text-to-image
| false | false | false |
creativeml-openrail-m
|
['ja']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image']
| false | true | true | 3,071 |
# blue_pencil
<strong>blue_pencil</strong> is a model created by merging various models in rough proportions.
Think of a few well-known models.
The models you thought of are most likely included in this one.
I do not know what the characteristics of this merged model are.
The goal was simply to try merging many different models, so the quality is not high either.
All models were converted to `fp16` using [stable-diffusion-webui-model-toolkit](https://github.com/arenatemp/stable-diffusion-webui-model-toolkit).
---
<details open><summary><h2 style="display: inline;"><code>blue_pencil-v2b</code> <small>(<code>@20230219</code>)</small></h2></summary>
A model in which [Balor-V3](https://huggingface.co/ploughB660/Balor-V3) was block-merged into `blue_pencil-v2` in place of [Balor-V2](https://huggingface.co/ploughB660/Balor-V2).
Expressiveness seems to have improved.
### Recommended settings
* VAE: [vae-ft-mse-840000-ema-pruned](https://huggingface.co/stabilityai/sd-vae-ft-mse-original)
* Negative Embedding: [EasyNegative](https://huggingface.co/datasets/gsdf/EasyNegative)
### Example outputs
```
girl, berlin, scenery
Negative prompt: EasyNegative
Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 3164975857
Size: 768x768, Clip skip: 2
Denoising strength: 0.65, Hires upscale: 2, Hires upscaler: Latent (nearest-exact)
```

</details>
<details open><summary><h2 style="display: inline;"><code>blue_pencil-v2</code> <small>(<code>@20230217</code>)</small></h2></summary>
A model re-blended on top of [AbyssOrangeMix3A1](https://huggingface.co/WarriorMama777/OrangeMixs) as the base.
The degree of freedom seems marginally higher (and the failure rate seems to have gone up with it).
Basically, the output should have the same feel as the v1 series.
The following models are included (in no particular order):
<details><summary>List of merge source models</summary>
* [AbyssOrangeMix3A1](https://huggingface.co/WarriorMama777/OrangeMixs)
* AnythingV3.0
* ChilloutMix
* GAPE60
* Counterfeit2.5
* Kenshi
* [Evt_M](https://huggingface.co/haor/Evt_M)
* Evt_V4
* ACertainty
* [GingerMixR](https://huggingface.co/Hemlok/GingerMix)
* LimeMixV2
* [Elysium_Anime_V3](https://huggingface.co/hesw23168/SD-Elysium-Model)
* [VaLJMix](https://huggingface.co/Hemlok/VaLMix)
* pastel-mix
* ACertainThing
* basil_mix
* Counterfeit-V2.5
* openjourney
* [HD-22](https://www.cognitionai.org/hdhowtogetstarted)
* [7th_anime_v3_testA](https://huggingface.co/syaimu/7th_test)
* [AniReal](https://huggingface.co/Hosioka/AniReal)
* [atwcustom_V4](https://huggingface.co/atsuwo/ATW-custom)
* [Nabylon-v1.2](https://huggingface.co/NegiInNattoMaki/Nabylon-v1.0)
* AbyssOrangeMix2
* LonganMix
* and more
* [TriPhaze_B](https://huggingface.co/Lucetepolis/TriPhaze)
* ultracolor.v4
* Counterfeit-V2.5
* Treebark
* [Balor-V2](https://huggingface.co/ploughB660/Balor-V2)
</details>
### Recommended settings
* VAE: [vae-ft-mse-840000-ema-pruned](https://huggingface.co/stabilityai/sd-vae-ft-mse-original)
* Negative Embedding: [EasyNegative](https://huggingface.co/datasets/gsdf/EasyNegative)
### Example outputs
```
girl, tokyo, scenery
Negative prompt: EasyNegative
Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 205537258
Size: 768x768, Clip skip: 2
Denoising strength: 0.65, Hires upscale: 2, Hires upscaler: Latent (nearest-exact)
```

```
girl, spacesuit, beautiful earth, scenery, on the moon
Negative prompt: EasyNegative
Steps: 50, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 1069444343
Size: 960x640, Clip skip: 2
Denoising strength: 0.6, Hires upscale: 2, Hires upscaler: Latent (nearest-exact)
```

</details>
<details><summary><h2 style="display: inline;"><code>blue_pencil-v1b</code> <small>(<code>@20230212</code>)</small></h2></summary>
A model in which [Balor-V2](https://huggingface.co/ploughB660/Balor-V2) was block-merged into `blue_pencil-v1` in place of [Amalgam_Mix](https://civitai.com/models/4758/amalgammix).
Its tendencies differ somewhat from v1.
### Recommended settings
* VAE: [vae-ft-mse-840000-ema-pruned](https://huggingface.co/stabilityai/sd-vae-ft-mse-original)
* Negative Embedding: [EasyNegative](https://huggingface.co/datasets/gsdf/EasyNegative)
### Example outputs
```
girl, tokyo, scenery
Negative prompt: EasyNegative
Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 205537258
Size: 768x768, Clip skip: 2
Denoising strength: 0.65, Hires upscale: 2, Hires upscaler: Latent (nearest-exact)
```

</details>
<details><summary><h2 style="display: inline;"><code>blue_pencil-v1</code> <small>(<code>@20230211</code>)</small></h2></summary>
The following models are included (in no particular order):
<details><summary>List of merge source models</summary>
* [Defmix-v1.1](https://huggingface.co/Defpoint/Defmix-v1.0)
* Counterfeit v1.0
* Counterfeit v2.0
* Basil Mix
* Anything v4.0
* [PastelRainier](https://huggingface.co/Hemlok/RainierMix)
* ACertainThing
* Anything-V4.5
* Counterfeit-V2.0
* Evt_V4-preview
* basil_mix
* pastel-mix
* [GingerMixR](https://huggingface.co/Hemlok/GingerMix)
* LimeMixV2
* [Elysium_Anime_V3](https://huggingface.co/hesw23168/SD-Elysium-Model)
* [SukiyakiMix-v1.0](https://huggingface.co/Vsukiyaki/SukiyakiMix-v1.0)
* pastel-mix
* AbyssOrangeMix2
* [HD-20](https://www.cognitionai.org/hdhowtogetstarted)
* [7th_anime_v3_testA](https://huggingface.co/syaimu/7th_test)
* [AniReal](https://huggingface.co/Hosioka/AniReal)
* [TriPhaze_B](https://huggingface.co/Lucetepolis/TriPhaze)
* ultracolor.v4
* Counterfeit-V2.5
* Treebark
* [Nabylon-v1.2](https://huggingface.co/NegiInNattoMaki/Nabylon-v1.0)
* AbyssOrangeMix2
* LonganMix
* and more
* [atwcustom_V4](https://huggingface.co/atsuwo/ATW-custom)
* [Amalgam_Mix](https://civitai.com/models/4758/amalgammix)
</details>
### Recommended settings
* VAE: [vae-ft-mse-840000-ema-pruned](https://huggingface.co/stabilityai/sd-vae-ft-mse-original)
* Negative Embedding: [EasyNegative](https://huggingface.co/datasets/gsdf/EasyNegative)
### Example outputs
#### 1
```
girl, tokyo, scenery
Negative prompt: EasyNegative
Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 2526423076
Size: 768x768, Clip skip: 2
```

##### Hires. fix
```
Denoising strength: 0.6, Hires upscale: 2, Hires upscaler: Latent (nearest-exact)
```

#### 2
```
girl, early teen, kimono, sakura, particles
Negative prompt: EasyNegative
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 4036639388,
Size: 512x768, Clip skip: 2
```

##### Hires. fix
```
Denoising strength: 0.62, Hires upscale: 2, Hires upscaler: Latent (nearest-exact)
```

#### 3
```
girl, early teen, t-shirt, pants, from behind, landscape, scenery, apocalyptic
Negative prompt: EasyNegative
Steps: 40, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 748447692,
Size: 768x512, Clip skip: 2
```

</details>
|
pittawat/a2c-PandaReachDense-v2
|
pittawat
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; use the `.zip` file actually present in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="pittawat/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
atorre/poca-SoccerTwos-50M
|
atorre
| null | 26 | 50 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 844 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: atorre/poca-SoccerTwos-50M
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
deprem-ml/deprem_satellite_semantic_whu
|
deprem-ml
|
segformer
| 9 | 272 |
transformers
| 0 | null | true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,233 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deprem_satellite_semantic_whu
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0692
- eval_mean_iou: 0.8739
- eval_mean_accuracy: 0.9277
- eval_overall_accuracy: 0.9786
- eval_accuracy_background: 0.9888
- eval_accuracy_building: 0.8665
- eval_iou_background: 0.9770
- eval_iou_building: 0.7708
- eval_runtime: 124.6705
- eval_samples_per_second: 4.011
- eval_steps_per_second: 4.011
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ritesh27gole/ppo-LunarLander-v2
|
ritesh27gole
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; use the `.zip` file actually present in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="ritesh27gole/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
MerlinTK/ppo-Huggy
|
MerlinTK
| null | 32 | 3 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 819 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: MerlinTK/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ploughB660/Balor-V2
|
ploughB660
| null | 7 | 0 | null | 9 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image']
| false | true | true | 1,671 |
Balor-V2 is a merged model created with the goal of improving rendering when merged with other models.
In AI art, the depiction of a character's eyes often becomes fuzzy as clothing and backgrounds gain more detail.
This model specifically aims to preserve the quality of the eye rendering.
<img src="https://i.imgur.com/QC5223V.jpg" width="450" height="">
<img src="https://i.imgur.com/e3Bj3fq.jpg" width="450" height="">
<img src="https://i.imgur.com/pM6KZYI.png" width="450" height="">
Balor-V2 can be block-merged with any other freely chosen model (ModelA) using the following distribution, giving that model the characteristics of Balor-V2.
| Model: A | Model: B | Weight | Base alpha | Merge Name |
| --- | --- | --- | --- | --- |
| ModelA | BalorV2 | 1,1,1,0,0,0,0,1,1,1,0,0,0,1,1,0,0,0,0,0,0,0,1,1,1 | 1 | BalorV2featModelA |
The distributed files use AbyssOrangeMix2_sfw, Counterfeit-V2.5, and Evt_M as ModelA, respectively.
Since the merge ratio of every element is 1, BalorV2featModelA can be used with another model (ModelB) to create a BalorV2featModelB. Popular merge models can be used as well.
The WarriorMama777/OrangeMixs model card was used as a reference for describing the merge ratios. Many thanks.
|
0RisingStar0/LiveArcaMix
|
0RisingStar0
| null | 11 | 0 |
diffusers
| 7 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image', 'diffusers']
| false | true | true | 1,615 |
<p align="center"><img src="https://huggingface.co/0RisingStar0/LiveArcaMix/resolve/main/00122-1387049655-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo%2C%20cowboy%20shot))%2C%20street%2C%20apartment%2C%20building%2C%20pavement%2C%20trees%2C%20flow.png">
<img src="https://huggingface.co/0RisingStar0/LiveArcaMix/resolve/main/00123-3601874250-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo%2C%20cowboy%20shot))%2C%20street%2C%20apartment%2C%20building%2C%20pavement%2C%20trees%2C%20flow.png">
<img src="https://huggingface.co/0RisingStar0/LiveArcaMix/resolve/main/00128-3652958527-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo%2C%20cowboy%20shot))%2C%20city%2C%20(skyscrapers)%2C%20sky%2C%20wide%20street%2C%20pavement%2C%20t.png">
<img src="https://huggingface.co/0RisingStar0/LiveArcaMix/resolve/main/00135-2463548988-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo%2C%20cowboy%20shot))%2C%20city%2C%20(skyscrapers)%2C%20sky%2C%20wide%20street%2C%20pavement%2C%20t.png"></p>
<center><b>LiveArcaMix for channel competition.</b></center>
1. AikimiXPv1.0 + CounterfeitV2.0
- Preset GRAD_A base_alpha : 0
=> AModel
2. AModel + CounterfeitV2.5
- IN09, IN10, OUT01, OUT02 : 1, else : 0 base_alpha : 0
=> BModel
3. BModel + BasilMixFixed
- Preset MID12_50, base_alpha : 1
=> CModel
4. CModel + Anything V4.5
- Preset FLAT_75 base_alpha : 0
=> DModel
5. CounterfeitV2.5 + AikimiXPv1.0
- Preset FLAT_25
=> EModel
6. DModel + EModel
- Preset RING10_5
=> Result(LiveArcaMix)
|
kf1022/Crowbox-Vol.1
|
kf1022
| null | 10 | 0 | null | 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image']
| false | true | true | 6,627 |

Based on Anything V4,
made with several LoRAs, and block-merged with community models.
Using these negative prompts has resulted in significant improvements:
https://huggingface.co/datasets/gsdf/EasyNegative
https://civitai.com/models/4629/deep-negative-v1x
I tried to trim the negative prompt down,
but gave up because the differences were hard to evaluate.
Maybe this is enough.
```
(EasyNegative:1.4), (NG_DeepNegative:1.4), (worst quality:1.4), (low quality:1.4) , (monochrome:1.1)
```

```
(masterpiece), (best quality), (illustration), (beautiful detailed), (highres), 1girl, solo, looking at viewer, sitting, sunset, (school uniform), long hair, red eyes, low twintails, ribbon, white shirt, (blue) skirt, black thighhighs, smile, blush, indoors, window, [building], classroom, table, chair, (coffee:0.5)
Negative prompt: (EasyNegative:1.4), (NG_DeepNegative:1.4), (worst quality:1.4), (low quality:1.4) , (monochrome:1.1), lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, normal quality, jpeg artifacts, (signature, watermark, username:1.4), blurry, bad feet, multiple breasts, (mutated hands and fingers:1.5 ), (long body :1.3), (mutation, poorly drawn :1.2) , black-white, liquid body, liquid tongue, disfigured, malformed, mutated, anatomical nonsense, text font ui, malformed hands, long neck, blurred, lowers, bad proportions, bad shadow, uncoordinated body, unnatural body, fused breasts, bad breasts, huge breasts, poorly drawn breasts, extra breasts, liquid breasts, heavy breasts, missing breasts, huge haunch, huge thighs, huge calf, fused hand, missing hand, (holding)
Steps: 20, Sampler: Euler a, CFG scale: 8.5, Seed: 2085854603, Size: 384x640, Model hash: 79fb704aeb, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires steps: 20, Hires upscaler: Latent, Eta: 0.67
```

```
(masterpiece), (best quality), (illustration), (beautiful detailed), (highres), 1girl, solo, (Middle Ages), outdoors, (rooftop), sitting on roof, phantom thieves, night, short hair, silver hair, blue eyes, curly hair, (top hat:1.2), (white cape), gloves, black pantyhose, smirk, street, horizon, castle, moon
Negative prompt: (EasyNegative:1.4), (NG_DeepNegative:1.4), (worst quality:1.4), (low quality:1.4) , (monochrome:1.1), lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, normal quality, jpeg artifacts, (signature, watermark, username:1.4), blurry, bad feet, multiple breasts, (mutated hands and fingers:1.5 ), (long body :1.3), (mutation, poorly drawn :1.2) , black-white, liquid body, liquid tongue, disfigured, malformed, mutated, anatomical nonsense, text font ui, malformed hands, long neck, blurred, lowers, bad proportions, bad shadow, uncoordinated body, unnatural body, fused breasts, bad breasts, huge breasts, poorly drawn breasts, extra breasts, liquid breasts, heavy breasts, missing breasts, huge haunch, huge thighs, huge calf, fused hand, missing hand, car, kneeing
Steps: 20, Sampler: Euler a, CFG scale: 8.5, Seed: 3008135755, Size: 768x512, Model hash: 79fb704aeb, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires steps: 20, Hires upscaler: Latent, Eta: 0.67
```

```
(masterpiece), (best quality), (illustration), (beautiful detailed), (highres), (1girl), (solo), sitting, upper body, (outdoors), afternoon tea, table, street, Middle Ages, gloves, (silver:1.2) long hair, (pink dress), beige hat, smile, upper teeth, see-through sleeves, castle, street, garden, (flower:1.1), dessert, (cake)
Negative prompt: (EasyNegative:1.4), (NG_DeepNegative:1.4), (worst quality:1.4), (low quality:1.4) , (monochrome:1.1), lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, normal quality, jpeg artifacts, (signature, watermark, username:1.4), blurry, bad feet, multiple breasts, (mutated hands and fingers:1.5 ), (long body :1.3), (mutation, poorly drawn :1.2) , black-white, liquid body, liquid tongue, disfigured, malformed, mutated, anatomical nonsense, text font ui, malformed hands, long neck, blurred, lowers, bad proportions, bad shadow, uncoordinated body, unnatural body, fused breasts, bad breasts, huge breasts, poorly drawn breasts, extra breasts, liquid breasts, heavy breasts, missing breasts, huge haunch, huge thighs, huge calf, fused hand, missing hand
Steps: 20, Sampler: Euler a, CFG scale: 8.5, Seed: 4194623658, Size: 512x704, Model hash: 79fb704aeb, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires steps: 20, Hires upscaler: lollypop, Eta: 0.67
```

```
(masterpiece), (best quality), (illustration), (beautiful detailed), (highres), 1girl, solo, hugging own legs, underwater, bubble, water, full body, long hair, silver hair, purple eyes, smile, hair ribbon, bare legs, sleeveless dress, frilled dress, barefoot, one eye closed
Negative prompt: (EasyNegative:1.4), (NG_DeepNegative:1.4), (worst quality:1.4), (low quality:1.4) , (monochrome:1.1), lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, normal quality, jpeg artifacts, (signature, watermark, username:1.4), blurry, bad feet, multiple breasts, (mutated hands and fingers:1.5 ), (long body :1.3), (mutation, poorly drawn :1.2) , black-white, liquid body, liquid tongue, disfigured, malformed, mutated, anatomical nonsense, text font ui, malformed hands, long neck, blurred, lowers, bad proportions, bad shadow, uncoordinated body, unnatural body, fused breasts, bad breasts, huge breasts, poorly drawn breasts, extra breasts, liquid breasts, heavy breasts, missing breasts, huge haunch, huge thighs, huge calf, fused hand, missing hand, black hair, fish, animal, creatue
Steps: 20, Sampler: Euler a, CFG scale: 8.5, Seed: 1169234669, Size: 512x512, Model hash: 79fb704aeb, Denoising strength: 0.65, Clip skip: 2, Hires upscale: 2, Hires steps: 20, Hires upscaler: lollypop, Eta: 0.67
```
|
OliP/a2c-AntBulletEnv-v0
|
OliP
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; use the `.zip` file actually present in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="OliP/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Duskfallcrew/duskfall-crew-visual-art-style-1-5
|
Duskfallcrew
| null | 21 | 17 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 930 |
### Duskfall Crew Visual Art Style 1.5 Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Trained on our own art
DO NOT SELL YOUR MERGES
DO NOT RESELL THIS MODEL
PLEASE GIVE CREDIT WHEN USING OR MERGING
IDGAF BEYOND THAT
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
`dskyart1` (use that token in your prompt)
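A minimal local-inference sketch with `diffusers`, assuming the repo is in `diffusers` format (the prompt wording is illustrative; only the `dskyart1` token comes from this card):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Duskfallcrew/duskfall-crew-visual-art-style-1-5", torch_dtype=torch.float16
).to("cuda")

# Include the concept token in the prompt, as noted above
image = pipe("a city street at dusk, dskyart1").images[0]
image.save("dskyart1_sample.png")
```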
|
Mithul/Reinforce-PixelCopter
|
Mithul
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
pittawat/a2c-PandaReachDense-v2-2
|
pittawat
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; use the `.zip` file actually present in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="pittawat/a2c-PandaReachDense-v2-2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
gubartz/flan-t5-base7
|
gubartz
|
t5
| 15 | 10 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text2text-generation', 'generated_from_trainer']
| true | true | true | 3,173 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base7
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5607
- Rouge1: 70.524
- Rouge2: 51.8406
- Rougel: 68.8374
- Rougelsum: 68.7883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 6.6376 | 1.0 | 52 | 1.2371 | 24.2463 | 14.2174 | 24.334 | 24.2896 |
| 1.0438 | 2.0 | 104 | 0.6731 | 62.1063 | 45.7876 | 60.653 | 60.6363 |
| 0.7045 | 3.0 | 156 | 0.5891 | 67.5005 | 49.6879 | 65.7653 | 65.7592 |
| 0.6175 | 4.0 | 208 | 0.5638 | 67.1457 | 49.5298 | 65.6249 | 65.6293 |
| 0.5804 | 5.0 | 260 | 0.5450 | 67.9682 | 49.7911 | 66.3619 | 66.4208 |
| 0.5633 | 6.0 | 312 | 0.5612 | 66.2747 | 48.739 | 64.8014 | 64.8095 |
| 0.5414 | 7.0 | 364 | 0.5626 | 67.3805 | 49.4668 | 65.7599 | 65.7622 |
| 0.5255 | 8.0 | 416 | 0.5948 | 65.0301 | 47.8636 | 63.4687 | 63.4516 |
| 0.5227 | 9.0 | 468 | 0.5307 | 67.7462 | 49.6922 | 66.2863 | 66.2771 |
| 0.5074 | 10.0 | 520 | 0.5547 | 69.0972 | 50.9602 | 67.5046 | 67.5282 |
| 0.4939 | 11.0 | 572 | 0.5843 | 67.6059 | 49.376 | 66.0482 | 66.1111 |
| 0.4803 | 12.0 | 624 | 0.5369 | 67.7427 | 49.8254 | 66.2864 | 66.2938 |
| 0.4869 | 13.0 | 676 | 0.5271 | 71.3421 | 53.1967 | 69.7099 | 69.7405 |
| 0.475 | 14.0 | 728 | 0.6614 | 67.9896 | 49.8985 | 66.3579 | 66.3518 |
| 0.479 | 15.0 | 780 | 0.5576 | 68.446 | 50.1408 | 66.942 | 66.9502 |
| 0.4647 | 16.0 | 832 | 0.5501 | 70.5046 | 51.7277 | 68.836 | 68.8619 |
| 0.4637 | 17.0 | 884 | 0.6093 | 69.8509 | 50.9488 | 68.1152 | 68.0849 |
| 0.4588 | 18.0 | 936 | 0.5773 | 69.8538 | 51.1648 | 68.1531 | 68.13 |
| 0.4606 | 19.0 | 988 | 0.5621 | 70.5416 | 51.9142 | 68.8009 | 68.7511 |
| 0.4586 | 20.0 | 1040 | 0.5607 | 70.524 | 51.8406 | 68.8374 | 68.7883 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Celal11/resnet-50-4-32
|
Celal11
|
resnet
| 9 | 0 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['image_folder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,479 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-4-32
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9705
- Accuracy: 0.6410
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3833 | 1.0 | 224 | 1.2683 | 0.5134 |
| 1.2404 | 2.0 | 448 | 1.1342 | 0.5659 |
| 1.1492 | 3.0 | 672 | 1.0359 | 0.6087 |
| 1.1433 | 4.0 | 896 | 0.9705 | 0.6410 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
OliP/a2c-PandaReachDense-v2
|
OliP
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; use the `.zip` file actually present in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="OliP/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
jmete/tweet_instruct_detect
|
jmete
|
bert
| 12 | 9 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,453 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweet_instruct_detect
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on a dataset combining manually labelled tweets (classed as either instructions or spam) with pre-processed instructions from the FLAN dataset, filtered to under 250 characters, used as positive instruction examples.
It achieves the following results on the evaluation set:
- Loss: 0.1300
- Accuracy: 0.9761
## Model description
This model is trained to help determine if tweets are useful instructions. This can be used to filter the large corpus of tweet data online into useful instruction datasets for instruction fine-tuning.
## Intended uses & limitations
Intended to be used to determine if tweets are useful instructions.
The model will be biased towards English data, and may be biased towards certain ways of phrasing "instructions". Instructions in this case may also be questions.
The current version of the model is very basic and can get confused by simple things. For example, simply adding a ? character biases it heavily towards the instruction class, even for an otherwise identical sentence, so the model is highly sensitive to certain characters and phrasings. This can hopefully be fixed with better training data or model tuning.
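A minimal usage sketch (the example inputs are illustrative; label names are whatever the model config defines):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jmete/tweet_instruct_detect")

# An instruction-like tweet vs. a spam-like one (illustrative inputs)
print(classifier("How do I fine-tune a language model on my own data?"))
print(classifier("WIN A FREE PHONE!!! click here now"))
```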
## Training and evaluation data
The model was fine-tuned on a relatively small number of tweets and instructions.
Train data: 749 examples
Test data: 251 examples
Of the total, 526 examples were manually labelled tweets, most of which were spam due to the high noise ratio in tweets.
Spam here can mean actual spam, gibberish, or statements that are fine in themselves but not useful as an instruction or question.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 47 | 0.3832 | 0.9562 |
| No log | 2.0 | 94 | 0.2004 | 0.9681 |
| No log | 3.0 | 141 | 0.1501 | 0.9721 |
| No log | 4.0 | 188 | 0.1362 | 0.9721 |
| No log | 5.0 | 235 | 0.1300 | 0.9761 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mili7522/Reinforce-CartPole-v1
|
mili7522
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
akghxhs55/poca-SoccerTwos
|
akghxhs55
| null | 20 | 49 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 843 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: akghxhs55/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Celal11/resnet-50-0.007
|
Celal11
|
resnet
| 9 | 0 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['image_folder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,480 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-0.007
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9735
- Accuracy: 0.6296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.007
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4221 | 1.0 | 224 | 1.2410 | 0.5274 |
| 1.2521 | 2.0 | 448 | 1.1716 | 0.5499 |
| 1.1609 | 3.0 | 672 | 1.0495 | 0.5968 |
| 1.1457 | 4.0 | 896 | 0.9735 | 0.6296 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ammr/ppo-Huggy
|
ammr
| null | 32 | 4 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 815 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: ammr/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
iammartian0/a2c-AntBulletEnv-v0
|
iammartian0
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; use the `.zip` file actually present in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="iammartian0/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
tiagoblima/punctuation-tedtalk2012-t5-base
|
tiagoblima
|
t5
| 15 | 10 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null |
['tiagoblima/punctuation-tedtalk2012-t5']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,232 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# punctuation-tedtalk2012-t5-base
This model is a fine-tuned version of [unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab) on the tiagoblima/punctuation-tedtalk2012-t5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0348 | 1.0 | 77894 | 0.0399 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.9.0
- Tokenizers 0.13.2
|
GesturingMan/Reinforce-CartPole-v1
|
GesturingMan
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jmcneves/q-FrozenLake-v1-4x4-noSlippery
|
jmcneves
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 397 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # `load_from_hub` is the download-and-unpickle helper from the Deep RL course notebook

model = load_from_hub(repo_id="jmcneves/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
davanstrien/autotrain-ia_covers-3416193421
|
davanstrien
|
vit
| 5 | 9 |
transformers
| 0 |
image-classification
| true | false | false | null | null |
['davanstrien/autotrain-data-ia_covers']
|
{'emissions': 1.69724123660189}
| 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['autotrain', 'vision', 'image-classification']
| false | true | true | 245 |
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 3416193421
- CO2 Emissions (in grams): 1.6972
## Validation Metrics
- Loss: 0.213
- Accuracy: 0.904
- Precision: 0.714
- Recall: 0.875
- AUC: 0.948
- F1: 0.787
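A minimal inference sketch (the image path is a placeholder; label names come from the model config):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="davanstrien/autotrain-ia_covers-3416193421")
print(classifier("cover.jpg"))  # path to a local cover image (placeholder)
```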
|
Deysi/clasificador-muchocine
|
Deysi
|
electra
| 10 | 4 |
transformers
| 0 |
text-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classification', 'generated_from_trainer']
| true | true | true | 1,367 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3029
- Accuracy: 0.4645
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.5140 | 0.3161 |
| 1.496 | 2.0 | 776 | 1.2868 | 0.4194 |
| 1.1622 | 3.0 | 1164 | 1.3029 | 0.4645 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
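A minimal inference sketch with the `pipeline` API (the example sentence is arbitrary, and label names are whatever the fine-tuned head defines):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Deysi/clasificador-muchocine")
print(clf("Una película entretenida, aunque algo predecible."))
```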
|
jmcneves/q-Taxi-v3
|
jmcneves
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 364 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="jmcneves/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
akgeni/poca-SoccerTwos5
|
akgeni
| null | 7 | 43 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 841 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
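For example (the config path and run id below are illustrative placeholders, not values from this repo):
```
mlagents-learn ./config/poca/SoccerTwos.yaml --run-id=SoccerTwos5 --resume
```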
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: akgeni/poca-SoccerTwos5
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
iammartian0/a2c-PandaReachDense-v2
|
iammartian0
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual Hub naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed).
path = load_from_hub(repo_id="iammartian0/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(path)
```
|
LarryAIDraw/lenaeightysix-000030
|
LarryAIDraw
| null | 3 | 0 | null | 0 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 271 |
My trained LoRA.
Suggested prompt: masterpiece, best quality, art by lenaeightysix, 1girl, ahoge, very long hair, silver hair, long sleeves, hair between eyes, bangs, medium breasts, buttons, belt, thighhighs, military uniform, pantyhose, looking at viewer
For LoRAs with more training steps, see my dataset; I suggest 10.
|
hectorjelly/Bert_and_Ernie_2
|
hectorjelly
| null | 25 | 41 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 846 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: hectorjelly/Bert_and_Ernie_2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sohm/a2c-AntBulletEnv-v0
|
sohm
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual Hub naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed).
path = load_from_hub(repo_id="sohm/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(path)
```
|
pfunk/Pong-v4-DQPN_p100-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,955 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p100.py).
## Get Started
To use this model, please install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[DQPN_p100]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p100 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p100 --start-policy-f 100000 --end-policy-f 100000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 100000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p100',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 100000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
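For reference, `start_e`, `end_e`, and `exploration_fraction` above drive a linear epsilon-greedy schedule; the sketch below mirrors CleanRL's `linear_schedule` helper:

```python
def linear_schedule(start_e: float, end_e: float, duration: float, t: int) -> float:
    """Linearly anneal epsilon from start_e to end_e over `duration` steps, then hold."""
    slope = (end_e - start_e) / duration
    return max(slope * t + start_e, end_e)

# With the hyperparameters above, epsilon anneals over 10% of the 10M total timesteps.
eps = linear_schedule(1.0, 0.01, 0.1 * 10_000_000, 500_000)  # -> 0.505
```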
|