repo_id (string, length 4-122) | author (string, 2-38) | model_type (string, 2-33) | files_per_repo (int64, 2-39k) | downloads_30d (int64, 0-33.7M) | library (string, 2-37) | likes (int64, 0-4.87k) | pipeline (string, 5-30) | pytorch (bool, 2 classes) | tensorflow (bool, 2 classes) | jax (bool, 2 classes) | license (string, 2-33) | languages (string, 2-1.63k) | datasets (string, 2-2.58k) | co2 (string, 6-258) | prs_count (int64, 0-125) | prs_open (int64, 0-120) | prs_merged (int64, 0-46) | prs_closed (int64, 0-34) | discussions_count (int64, 0-218) | discussions_open (int64, 0-148) | discussions_closed (int64, 0-70) | tags (string, 2-513) | has_model_index (bool, 2 classes) | has_metadata (bool, 2 classes) | has_text (bool, 1 class) | text_length (int64, 201-598k) | readme (string, 0-598k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
edraper88/distilbert-base-uncased-finetuned-imdb
|
edraper88
|
distilbert
| 16 | 5 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,318 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
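No usage example is provided in the card; the following is a minimal sketch (not part of the original card) assuming the standard `transformers` fill-mask pipeline and the PyTorch weights listed above:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="edraper88/distilbert-base-uncased-finetuned-imdb")

# DistilBERT uses the [MASK] token; print the top predictions for the masked word.
for pred in fill_mask("This movie was absolutely [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```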
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Zekunli/flan-t5-large-extraction-cnndm_8000-all
|
Zekunli
|
t5
| 10 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,552 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-extraction-cnndm_8000-all
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6960
- Rouge1: 35.1425
- Rouge2: 15.3877
- Rougel: 30.0992
- Rougelsum: 30.1879
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
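The card gives no inference snippet; a rough sketch with the generic `transformers` text2text pipeline follows (the exact extraction prompt format used during fine-tuning is not documented here, so the input text is only illustrative):
```python
from transformers import pipeline

extractor = pipeline("text2text-generation", model="Zekunli/flan-t5-large-extraction-cnndm_8000-all")

article = "The quick brown fox jumped over the lazy dog near the river bank on Sunday."
print(extractor(article, max_new_tokens=64)[0]["generated_text"])
```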
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.1837 | 0.2 | 200 | 1.8342 | 33.7673 | 14.4744 | 28.8398 | 28.8415 | 19.0 |
| 1.9557 | 0.4 | 400 | 1.7798 | 34.3577 | 14.8613 | 29.769 | 29.766 | 18.986 |
| 1.9219 | 0.6 | 600 | 1.7428 | 34.8589 | 15.4488 | 30.1084 | 30.1336 | 18.99 |
| 1.871 | 0.8 | 800 | 1.7408 | 35.001 | 15.597 | 30.3374 | 30.37 | 18.99 |
| 1.8729 | 1.0 | 1000 | 1.7502 | 34.9305 | 15.5718 | 30.1495 | 30.1513 | 19.0 |
| 1.7803 | 1.2 | 1200 | 1.7261 | 35.7504 | 15.4172 | 30.6898 | 30.7362 | 19.0 |
| 1.7674 | 1.4 | 1400 | 1.7214 | 35.9564 | 15.6508 | 30.3541 | 30.4292 | 19.0 |
| 1.7704 | 1.6 | 1600 | 1.7253 | 35.2706 | 15.7274 | 30.118 | 30.1324 | 19.0 |
| 1.7656 | 1.8 | 1800 | 1.6960 | 35.1425 | 15.3877 | 30.0992 | 30.1879 | 19.0 |
| 1.7545 | 2.0 | 2000 | 1.7186 | 34.6436 | 15.2712 | 29.9781 | 29.9698 | 19.0 |
| 1.6739 | 2.2 | 2200 | 1.7245 | 35.4083 | 15.8808 | 30.6222 | 30.6752 | 19.0 |
| 1.6836 | 2.4 | 2400 | 1.7212 | 35.1829 | 15.5181 | 30.2438 | 30.262 | 19.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
mingdinghan/ppo-Huggy
|
mingdinghan
| null | 32 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 822 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: mingdinghan/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HusseinHE/saad
|
HusseinHE
| null | 99 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 1,366 |
### Saad Dreambooth model trained by HusseinHE with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
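Alternatively, a local `diffusers` sketch (not from the original card; it assumes the repo is in the standard diffusers format produced by the DreamBooth training Space):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("HusseinHE/saad", torch_dtype=torch.float16).to("cuda")

# "sksaad" is the instance token mentioned below; include it in the prompt.
image = pipe("portrait photo of sksaad, studio lighting").images[0]
image.save("saad.png")
```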
Sample pictures of:
sksaad (use that on your prompt)

|
css919/poca-SoccerTwos
|
css919
| null | 20 | 3 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 840 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: css919/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
amoselberg/ppo-SnowballTarget1
|
amoselberg
| null | 20 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 858 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: amoselberg/ppo-SnowballTarget1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
taqwa92/whisper-small-ArabicT12
|
taqwa92
|
whisper
| 16 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ar']
|
['taqwa92/tm_data']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['hf-asr-leaderboard', 'generated_from_trainer']
| true | true | true | 1,288 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Arabic- Taqwa
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the tm_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5530
- Wer: 45.6372
## Model description
More information needed
## Intended uses & limitations
More information needed
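No inference example is given; a minimal sketch using the `transformers` ASR pipeline (the audio file path below is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="taqwa92/whisper-small-ArabicT12")

# Transcribe a local Arabic audio clip (any format readable by ffmpeg).
print(asr("sample_arabic_clip.wav")["text"])
```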
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1812 | 5.0 | 500 | 0.5530 | 45.6372 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
amoselberg/pyramidsRND
|
amoselberg
| null | 12 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
| false | true | true | 832 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: amoselberg/pyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
TitanDiffuse108/EpiCentre
|
TitanDiffuse108
| null | 3 | 0 | null | 0 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 5,303 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
Hello everyone, this is my first model. It's free, so feel free to use it. EpiCentre was made by merging two models; it produces contrasting colours with rich detail and beautiful faces.
You can use this model anywhere you want.
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** BETTER THAN NOTHING (CAPTIANTITAN)
- **Shared by [optional]:** [More Information Needed]
- **Model type:** text-to-image
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** Deliberate, Realistic Vision
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
This model is intended only for generating images that are safe for the community.
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
This model can be used without any filter; please create images responsibly.
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
EdenYav/Reinforce-1
|
EdenYav
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dotunadegbite/Reinforce-CartPole-v1
|
dotunadegbite
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
quaizarv/Reinforce-PixelCopter
|
quaizarv
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mili7522/ppo-SnowballTarget
|
mili7522
| null | 20 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 855 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: mili7522/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Alsebay/Chilloutmix-Ni-fix
|
Alsebay
| null | 4 | 0 | null | 0 | null | false | false | false |
creativeml-openrail-m
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 252 |
This is a fixed version of Chilloutmix-Ni (the original was missing some CLIP weights).
- WARNING: the fixed version may turn out worse than the non-fixed one.
Here are the main links:
- https://civitai.com/models/6424/chilloutmix
- https://huggingface.co/TASUKU2023/Chilloutmix
|
Azher/Anything-v4.5-vae-fp16-ckpt
|
Azher
| null | 3 | 0 | null | 0 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 343 |
# Model: Anything v4.5
It has the following properties bundled right out of the box:
- Included: vae
- Half-precision floating point format: fp16
# Model Sample Outputs
<p align="center">
<img src="https://huggingface.co/Azher/Anything-v4.5-vae-fp16-diffuser/resolve/main/Image%201.png" alt="Vampire" width="300" height="300" style="display:inline-block;">
<img src="https://huggingface.co/Azher/Anything-v4.5-vae-fp16-diffuser/resolve/main/Image%202.png" alt="Vampire" width="300" height="300" style="display:inline-block;">
<img src="https://huggingface.co/Azher/Anything-v4.5-vae-fp16-diffuser/resolve/main/Image%203.png" alt="Vampire" width="300" height="300" style="display:inline-block;">
<img src="https://huggingface.co/Azher/Anything-v4.5-vae-fp16-diffuser/resolve/main/Image%204.png" alt="Vampire" width="300" height="300" style="display:inline-block;">
</p>
Output Information:
- Prompt:
```
beautiful, masterpiece, black dress, black hair, red eyes, pale, 1girl, stunning, black collar choker, jeweled earrings
```
- Negative Prompt:
```
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, nsfw
```
- Setup:
```
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 11, Size: 512x512
```
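For reference, the setup above maps roughly onto `diffusers` as in the sketch below (this is not from the original card: the checkpoint filename is hypothetical, and DPM++ 2M Karras is assumed to correspond to `DPMSolverMultistepScheduler` with Karras sigmas):
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Hypothetical local filename for the fp16 .ckpt hosted in this repo.
pipe = StableDiffusionPipeline.from_single_file("anything-v4.5-vae-fp16.ckpt", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

image = pipe(
    prompt="beautiful, masterpiece, black dress, black hair, red eyes, pale, 1girl",
    negative_prompt="lowres, bad anatomy, bad hands, worst quality, low quality",
    num_inference_steps=30,
    guidance_scale=11,
    width=512,
    height=512,
).images[0]
```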
# Model Sources
- **Original FP16 Model:** [https://huggingface.co/andite/anything-v4.0/blob/main/anything-v4.5-pruned-fp16.ckpt](https://huggingface.co/andite/anything-v4.0/blob/main/anything-v4.5-pruned-fp16.ckpt)
- **vae swap:** [https://huggingface.co/andite/anything-v4.0/blob/main/anything-v4.0.vae.pt](https://huggingface.co/andite/anything-v4.0/blob/main/anything-v4.0.vae.pt)
|
pfunk/Pong-v4-DQPN_p500-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,955 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p500.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p500]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p500 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p500 --start-policy-f 500000 --end-policy-f 500000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 500000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p500',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 500000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pittawat/poca-SoccerTwos
|
pittawat
| null | 20 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 842 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: pittawat/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
tomaccer/flan-t5-base-juraqanda
|
tomaccer
|
t5
| 13 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,792 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-juraqanda
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0784
- Rouge1: 9.5491
- Rouge2: 1.4927
- Rougel: 8.828
- Rougelsum: 9.2708
- Gen Len: 18.5260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 4.0303 | 1.0 | 712 | 3.3466 | 9.4455 | 1.2684 | 8.8558 | 9.1832 | 18.7577 |
| 3.6049 | 2.0 | 1424 | 3.1931 | 10.0714 | 1.4116 | 9.4163 | 9.8024 | 18.6461 |
| 3.3464 | 3.0 | 2136 | 3.1246 | 9.6542 | 1.4317 | 8.9441 | 9.36 | 18.5485 |
| 3.2831 | 4.0 | 2848 | 3.0910 | 9.6676 | 1.4584 | 8.9533 | 9.3876 | 18.6706 |
| 3.2176 | 5.0 | 3560 | 3.0784 | 9.5491 | 1.4927 | 8.828 | 9.2708 | 18.5260 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
lora-library/margret-stalizburg-lora-test2
|
lora-library
| null | 71 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
| false | true | true | 502 |
# LoRA DreamBooth - margret-stalizburg-v1-lora
These are LoRA adaption weights for [andite/anything-v4.0](https://huggingface.co/andite/anything-v4.0). The weights were trained on the instance prompt "margret stalizburg" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
Test prompt: margret stalizburg
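A loading sketch (not from the original card), assuming a recent `diffusers` version and that the repo hosts the LoRA weights in a format `load_lora_weights` understands:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the LoRA was trained against, then attach the LoRA weights.
pipe = StableDiffusionPipeline.from_pretrained("andite/anything-v4.0", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("lora-library/margret-stalizburg-lora-test2")

image = pipe("margret stalizburg").images[0]
image.save("margret.png")
```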




|
mili7522/ppo-Pyramids
|
mili7522
| null | 16 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
| false | true | true | 831 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: mili7522/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mqy/mt5-small-finetuned-12feb-1
|
mqy
|
mt5
| 17 | 0 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 1,904 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-12feb-1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4285
- Rouge1: 18.23
- Rouge2: 5.42
- Rougel: 18.09
## Model description
More information needed
## Intended uses & limitations
More information needed
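No usage snippet is included; a rough sketch with the `transformers` summarization pipeline (input text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mqy/mt5-small-finetuned-12feb-1")

text = "Replace this placeholder with the document you want to summarize."
print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```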
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 3.0346 | 1.0 | 311 | 2.4880 | 17.19 | 5.28 | 17.06 |
| 2.8943 | 2.0 | 622 | 2.4751 | 17.77 | 5.18 | 17.59 |
| 2.8397 | 3.0 | 933 | 2.4719 | 17.65 | 5.38 | 17.55 |
| 2.806 | 4.0 | 1244 | 2.4614 | 18.26 | 5.23 | 18.03 |
| 2.7842 | 5.0 | 1555 | 2.4464 | 18.08 | 5.51 | 17.96 |
| 2.7855 | 6.0 | 1866 | 2.4437 | 17.9 | 5.37 | 17.8 |
| 2.7796 | 7.0 | 2177 | 2.4270 | 18.07 | 5.38 | 17.95 |
| 2.7951 | 8.0 | 2488 | 2.4267 | 17.96 | 5.36 | 17.85 |
| 2.7864 | 9.0 | 2799 | 2.4285 | 18.23 | 5.42 | 18.09 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
antonellaavad/mistermango24-margret-stalizburg-zp92-dreambooth-v1-0
|
antonellaavad
| null | 71 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
| false | true | true | 502 |
# LoRA DreamBooth - margret-stalizburg-v1-lora
These are LoRA adaption weights for [andite/anything-v4.0](https://huggingface.co/andite/anything-v4.0). The weights were trained on the instance prompt "margret stalizburg" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
Test prompt: margret stalizburg




|
antonellaavad/https-huggingface-co-mistermango24-margret-stalizburg-zp92-dreambooth-v1-0
|
antonellaavad
| null | 71 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
| false | true | true | 502 |
# LoRA DreamBooth - margret-stalizburg-v1-lora
These are LoRA adaption weights for [andite/anything-v4.0](https://huggingface.co/andite/anything-v4.0). The weights were trained on the instance prompt "margret stalizburg" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
Test prompt: margret stalizburg




|
antonellaavad/mistermango-has-a-test
|
antonellaavad
| null | 161 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
| false | true | true | 510 |
# LoRA DreamBooth - margret-stalizburg-lora-test-3
These are LoRA adaption weights for [Linaqruf/anything-v3.0](https://huggingface.co/Linaqruf/anything-v3.0). The weights were trained on the instance prompt "margret stalizburg" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
Test prompt: margret stalizburg




|
mili7522/ppo-PyramidsRND
|
mili7522
| null | 16 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
| false | true | true | 834 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: mili7522/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
xiazeng/poca-SoccerTwos
|
xiazeng
| null | 20 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 841 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: xiazeng/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
zxc12138/pegasus-samsum
|
zxc12138
|
pegasus
| 13 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,240 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4812
## Model description
More information needed
## Intended uses & limitations
More information needed
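The card omits a usage example; a minimal generation sketch with the standard seq2seq API follows (the dialogue is only illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("zxc12138/pegasus-samsum")
model = AutoModelForSeq2SeqLM.from_pretrained("zxc12138/pegasus-samsum")

dialogue = "Anna: Are we still meeting at 6?\nTom: Yes, see you at the cafe."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```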
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6928 | 0.54 | 500 | 1.4812 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
harshadbhatia/LunarLander-v2-ppo
|
harshadbhatia
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
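One possible way to fill in the stub above (a sketch only: the checkpoint filename is a guess based on common `huggingface_sb3` naming and should be checked against the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename below is hypothetical.
checkpoint = load_from_hub(repo_id="harshadbhatia/LunarLander-v2-ppo", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```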
|
darkvibes/lizzyflex
|
darkvibes
| null | 19 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 420 |
### lizzyflex Dreambooth model trained by darkvibes with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
vumichien/AnimeGANv2_Hayao
|
vumichien
| null | 3 | 0 | null | 0 | null | false | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AnimeGanv2']
| false | true | true | 678 |
## Model Description
Transforming photos of real-world scenes into anime style images is a meaningful and challenging task in terms of computer vision and artistic style transfer.
AnimeGANv2_Hayao, made by Asher Chan.
The official code is available [here](https://github.com/TachibanaYoshino/AnimeGANv2).
## License
This repo is made freely available to academic and
non-academic entities for non-commercial purposes such
as academic research, teaching, scientific publications.
Permission is granted to use the AnimeGAN given
that you agree to my license terms. Regarding the
request for commercial use, please contact us via
email to help you obtain the authorization letter.
|
vumichien/AnimeGANv3_PortraitSketch
|
vumichien
| null | 3 | 0 | null | 0 | null | false | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AnimeGanv3']
| false | true | true | 687 |
## Model Description
Transforming photos of real-world scenes into anime style images is a meaningful and challenging task in terms of computer vision and artistic style transfer.
AnimeGANv3_PortraitSketch, made by Asher Chan.
The official code is available [here](https://github.com/TachibanaYoshino/AnimeGANv2).
## License
This repo is made freely available to academic and
non-academic entities for non-commercial purposes such
as academic research, teaching, scientific publications.
Permission is granted to use the AnimeGAN given
that you agree to my license terms. Regarding the
request for commercial use, please contact us via
email to help you obtain the authorization letter.
|
vumichien/AnimeGANv2_Paprika
|
vumichien
| null | 3 | 0 | null | 0 | null | false | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AnimeGanv2']
| false | true | true | 680 |
## Model Description
Transforming photos of real-world scenes into anime style images is a meaningful and challenging task in terms of computer vision and artistic style transfer.
AnimeGANv2_Paprika, made by Asher Chan.
The official code is available [here](https://github.com/TachibanaYoshino/AnimeGANv2).
## License
This repo is made freely available to academic and
non-academic entities for non-commercial purposes such
as academic research, teaching, scientific publications.
Permission is granted to use the AnimeGAN given
that you agree to my license terms. Regarding the
request for commercial use, please contact us via
email to help you obtain the authorization letter.
|
Ransaka/ppo-SnowballTarget
|
Ransaka
| null | 20 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 854 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: Ransaka/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ahng79/ppo-Huggy
|
ahng79
| null | 32 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 817 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: ahng79/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
vumichien/AnimeGANv2_Shinkai
|
vumichien
| null | 3 | 0 | null | 0 | null | false | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AnimeGanv2']
| false | true | true | 682 |
## Model Description
Transforming photos of real-world scenes into anime style images is a meaningful and challenging task in terms of computer vision and artistic style transfer.
AnimeGANv2_Shinkai, made by Asher Chan.
The official code is available [here](https://github.com/TachibanaYoshino/AnimeGANv2).
## License
This repo is made freely available to academic and
non-academic entities for non-commercial purposes such
as academic research, teaching, scientific publications.
Permission is granted to use the AnimeGAN given
that you agree to my license terms. Regarding the
request for commercial use, please contact us via
email to help you obtain the authorization letter.
|
vumichien/AnimeGANv3_JP_face
|
vumichien
| null | 3 | 0 | null | 0 | null | false | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AnimeGanv3']
| false | true | true | 681 |
## Model Description
Transforming photos of real-world scenes into anime style images is a meaningful and challenging task in terms of computer vision and artistic style transfer.
AnimeGANv3_JP_face, made by Asher Chan.
The official code is available [here](https://github.com/TachibanaYoshino/AnimeGANv2).
## License
This repo is made freely available to academic and
non-academic entities for non-commercial purposes such
as academic research, teaching, scientific publications.
Permission is granted to use the AnimeGAN given
that you agree to my license terms. Regarding the
request for commercial use, please contact us via
email to help you obtain the authorization letter.
|
acesanddiamonds/ppo-Huggy
|
acesanddiamonds
| null | 32 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 826 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: acesanddiamonds/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Deysi/mt5-small-sumarizacion-textos-bilingual
|
Deysi
|
mt5
| 9 | 0 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,654 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deysi/mt5-small-sumarizacion-textos-bilingual
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.1454
- Validation Loss: 3.3754
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
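No usage example is given; a minimal TensorFlow sketch follows (the card lists TensorFlow weights only, so `TFAutoModelForSeq2SeqLM` is assumed here; the input text is a placeholder):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Deysi/mt5-small-sumarizacion-textos-bilingual")
model = TFAutoModelForSeq2SeqLM.from_pretrained("Deysi/mt5-small-sumarizacion-textos-bilingual")

text = "Replace this placeholder with the document you want to summarize."
inputs = tokenizer(text, return_tensors="tf", truncation=True)
summary_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, max_new_tokens=48)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```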
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2282 | 4.6664 | 0 |
| 6.0978 | 3.8777 | 1 |
| 5.2791 | 3.6299 | 2 |
| 4.8386 | 3.5296 | 3 |
| 4.5569 | 3.4565 | 4 |
| 4.3616 | 3.4055 | 5 |
| 4.2154 | 3.3870 | 6 |
| 4.1454 | 3.3754 | 7 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
yl131/ppo-Huggy
|
yl131
| null | 32 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 816 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: yl131/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gbarcik/ppo-LundarLander-v2
|
gbarcik
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
girinlp-i2i/generic_ner_model
|
girinlp-i2i
|
bert
| 16 | 0 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,518 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# generic_ner_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0999
- Precision: 0.8727
- Recall: 0.8953
- F1: 0.8838
- Accuracy: 0.9740
## Model description
More information needed
## Intended uses & limitations
More information needed
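No usage snippet is provided; a quick sketch with the `transformers` token-classification pipeline (the entity labels depend on the unnamed training dataset, so the tags printed below are not documented here):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="girinlp-i2i/generic_ner_model",
               aggregation_strategy="simple")

for entity in ner("Barack Obama visited Microsoft headquarters in Seattle."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```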
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1083 | 1.0 | 1958 | 0.1007 | 0.8684 | 0.8836 | 0.8759 | 0.9723 |
| 0.0679 | 2.0 | 3916 | 0.0977 | 0.8672 | 0.8960 | 0.8813 | 0.9738 |
| 0.0475 | 3.0 | 5874 | 0.0999 | 0.8727 | 0.8953 | 0.8838 | 0.9740 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-1-5
|
fathyshalab
|
roberta
| 14 | 0 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,532 |
# fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-1-5
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-1-5")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Martha-987/whisper-small-ArabicMartha
|
Martha-987
|
whisper
| 16 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ar']
|
['Martha-987/MyOwnData']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['hf-asr-leaderboard', 'generated_from_trainer']
| true | true | true | 1,288 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ar- Martha
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the MyOwnData dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4467
- Wer: 47.4812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.276 | 2.54 | 1000 | 0.4467 | 47.4812 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
fathyshalab/domain_transfer_clinic_credit_cards-massive_transport-roberta-large-v1-1-5
|
fathyshalab
|
roberta
| 14 | 0 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,538 |
# fathyshalab/domain_transfer_clinic_credit_cards-massive_transport-roberta-large-v1-1-5
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_transport-roberta-large-v1-1-5")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
gbarcik/q-FrozenLake-v1-4x4-noSlippery
|
gbarcik
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 396 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="gbarcik/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Seyfelislem/arabic_whisper_small_version_2
|
Seyfelislem
|
whisper
| 14 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ar']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,413 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabic_whisper_small_version_2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3355
- Wer: 44.5616
## Model description
More information needed
## Intended uses & limitations
More information needed
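In the absence of documented usage, here is a minimal inference sketch with the `transformers` pipeline (the audio file name and chunk length are illustrative assumptions, not part of the original training setup):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for Arabic speech recognition
asr = pipeline(
    "automatic-speech-recognition",
    model="Seyfelislem/arabic_whisper_small_version_2",
)

# Transcribe a local audio file (the pipeline decodes and resamples the audio for you)
print(asr("sample_arabic.wav", chunk_length_s=30)["text"])
```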
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0835 | 0.42 | 1000 | 0.3859 | 49.2094 |
| 0.1607 | 0.83 | 2000 | 0.3481 | 46.8938 |
| 0.08 | 1.25 | 3000 | 0.3355 | 44.5616 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-2-5
|
fathyshalab
|
roberta
| 14 | 0 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,532 |
# fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-2-5
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-2-5")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst ๐คฎ"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
nhiro3303/Reinforce-CartPole-v1
|
nhiro3303
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Duskfallcrew/10-minute-grumpy-hour
|
Duskfallcrew
| null | 21 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 1,036 |
[](https://huggingface.co/spaces/Duskfallcrew/10-minute-grumpy-hour)
### 10 Minute Grumpy Hour Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
This model is horrifying.
I'm not responsible if it gives you rabies. XD
prilosecotc1 (use that in your prompt)
burgie (use that in your prompt)
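For a quick local test, a minimal `diffusers` sketch (the prompt wording, dtype and device are illustrative assumptions; use the concept tokens listed above):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth concept from the Hub
pipe = StableDiffusionPipeline.from_pretrained(
    "Duskfallcrew/10-minute-grumpy-hour", torch_dtype=torch.float16
).to("cuda")

# Include the concept token in the prompt
prompt = "a portrait of prilosecotc1, highly detailed"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("grumpy.png")
```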
|
yjoon/xlm-roberta-base-finetuned-panx-de-fr
|
yjoon
|
xlm-roberta
| 9 | 0 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,318 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6706
- F1: 0.6245
## Model description
More information needed
## Intended uses & limitations
More information needed
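In the absence of documented usage, here is a minimal inference sketch with the `transformers` pipeline (the example sentence is an illustrative assumption; entity labels follow the PAN-X/xtreme NER scheme):
```python
from transformers import pipeline

# Named entity recognition with the fine-tuned checkpoint
ner = pipeline(
    "token-classification",
    model="yjoon/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("Angela Merkel a rencontré Emmanuel Macron à Berlin."))
```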
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 192
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 11 | 1.1624 | 0.1327 |
| No log | 2.0 | 22 | 0.7871 | 0.4940 |
| No log | 3.0 | 33 | 0.6706 | 0.6245 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.10.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
lilouuch/Goodreads_Books_Reviews_Roberta_52
|
lilouuch
|
roberta
| 6 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,343 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Goodreads_Books_Reviews_Roberta_52
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8592
- F1: 0.5986
- Accuracy: 0.6349
## Model description
More information needed
## Intended uses & limitations
More information needed
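In the absence of documented usage, here is a minimal inference sketch with the `transformers` pipeline (the review text is an illustrative placeholder; the label names depend on how the checkpoint's config maps the Goodreads rating classes):
```python
from transformers import pipeline

# Review classification with the fine-tuned RoBERTa checkpoint
clf = pipeline(
    "text-classification",
    model="lilouuch/Goodreads_Books_Reviews_Roberta_52",
)

print(clf("A slow start, but the second half of the book completely won me over."))
```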
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------:|
| 0.8824 | 1.0 | 25313 | 0.8754 | 0.5792 | 0.6254 |
| 0.8127 | 2.0 | 50626 | 0.8592 | 0.5986 | 0.6349 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
pnparam/PNP_dys_asr_960h
|
pnparam
|
wav2vec2
| 16 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,357 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PNP_dys_asr_960h
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7169
- Wer: 1.4123
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.4668 | 1.63 | 500 | 2.6987 | 1.0226 |
| 2.0533 | 3.26 | 1000 | 1.0528 | 2.4236 |
| 0.4828 | 4.89 | 1500 | 0.7560 | 1.3358 |
| 0.1604 | 6.51 | 2000 | 0.7169 | 1.4123 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.1+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
|
helpingstar/q-FrozenLake-v1-4x4-noSlippery
|
helpingstar
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 400 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="helpingstar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mRoszak/PandaReach
|
mRoszak
| null | 11 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachJointsDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 370 |
# **A2C** Agent playing **PandaReachJointsDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachJointsDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
helpingstar/q-Taxi-v3-v1
|
helpingstar
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 370 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="helpingstar/q-Taxi-v3-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
alexcasq/OUTPUT
|
alexcasq
| null | 33 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
| false | true | true | 363 |
# LoRA DreamBooth - alexcasq/OUTPUT
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks alexcasq using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




|
KnutJaegersberg/topic-classification-IPTC-subject-labels
|
KnutJaegersberg
|
xlm-roberta
| 13 | 0 |
sentence-transformers
| 4 |
text-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'transformers', 'SetFit', 'News']
| false | true | true | 1,779 |
# IPTC topic classifier (multilingual)
A SetFit model fit on 166 downsampled multilingual IPTC Subject labels (concatenated at the lowest hierarchy level into artificial sentences of keywords) to predict the mid-level news categories.
The purpose of this classifier is to support exploring corpora as a weak labeler, since the representations of these descriptions are only approximations of real documents from those topics.
The dataset I used to train the model is based on this file:
https://huggingface.co/datasets/KnutJaegersberg/News_topics_IPTC_codes_long
Accuracy on highest level labels in eval:
0.9779412
Accuracy/F1/mcc on mid level labels in eval:
0.6992481/0.6666667/0.6992617
More interestingly, I used the Kaggle dataset with headlines from the Huffington Post and manually selected 15 overlapping high-level categories to evaluate the performance.
https://www.kaggle.com/datasets/rmisra/news-category-dataset
While an MCC of 0.1968043 on this dataset does not sound as good as before, the mistakes can usually also be seen as a re-interpretation: e.g., news on arrests were categorized as entertainment in the Huffington Post dataset, while the classifier put them into the crime category.
My current impression is that this system is useful for its intended purpose.
The numeric categories can be joined with the labels by using this table:
https://huggingface.co/datasets/KnutJaegersberg/IPTC-topic-classifier-labels
It looks like the Hugging Face inference widget on the right does not yet handle SetFit models; I can't do anything about that.
Use it like any other SetFit model:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("KnutJaegersberg/IPTC-classifier-ml")
# Run inference
preds = model(["Rachel Dolezal Faces Felony Charges For Welfare Fraud", "Elon Musk just got lucky", "The hype on AI is different from the hype on other tech topics"])
```
|
helpingstar/q-Taxi-v3-v2
|
helpingstar
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 370 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="helpingstar/q-Taxi-v3-v2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mwissing/Reinforce-cartpole-v1
|
mwissing
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Gatozu35/tortoise-tts
|
Gatozu35
| null | 8 | 0 | null | 1 |
text-to-speech
| false | false | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-speech', 'audio']
| false | true | true | 6,206 |
# Model Card for TorToiSe
<!-- Provide a quick summary of what the model is/does. [Optional] -->
Tortoise is a text-to-speech program built with the following priorities:
1. Strong multi-voice capabilities.
2. Highly realistic prosody and intonation.
# Table of Contents
- [Model Card for TorToiSe](#model-card-for-tortoise)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
- [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
- [Testing Data](#testing-data)
- [Factors](#factors)
- [Metrics](#metrics)
- [Results](#results)
- [Model Examination](#model-examination)
- [Environmental Impact](#environmental-impact)
- [Model Architecture and Objective](#model-architecture-and-objective)
- [Compute Infrastructure](#compute-infrastructure)
- [Hardware](#hardware)
- [Software](#software)
- [Citation](#citation)
- [Model Card Contact](#model-card-contact)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
Tortoise is a text-to-speech program built with the following priorities:
1. Strong multi-voice capabilities.
2. Highly realistic prosody and intonation.
- **Developed by:** James Betker
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** apache-2.0
- **Resources for more information:**
- [GitHub Repo](https://github.com/152334H/tortoise-tts-fast)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
More information on training data needed
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
More information needed
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
More information needed
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
More information needed
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
More information needed
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@software{Betker_TorToiSe_text-to-speech_2022,
author = {Betker, James},
month = {4},
title = {{TorToiSe text-to-speech}},
url = {https://github.com/neonbjb/tortoise-tts},
version = {2.0},
year = {2022}
}
**APA:**
Betker, J. (2022). TorToiSe text-to-speech (Version 2.0) [Computer software]. https://github.com/neonbjb/tortoise-tts
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
Gatozu35
# Model Card Contact
Use the discussion tab
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
More information needed
</details>
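As a starting point, a minimal sketch using the upstream tortoise-tts API (the voice name, preset, output path and sample rate are illustrative assumptions taken from the linked GitHub repository, not guarantees about this fork):
```python
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

# Build the TTS object (downloads/loads the model weights on first use)
tts = TextToSpeech()

# Condition on one of the bundled reference voices and synthesize speech
voice_samples, conditioning_latents = load_voice("tom")
gen = tts.tts_with_preset(
    "Tortoise is a text-to-speech program.",
    voice_samples=voice_samples,
    conditioning_latents=conditioning_latents,
    preset="fast",
)
torchaudio.save("generated.wav", gen.squeeze(0).cpu(), 24000)
```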
|
chavicoski/poca-SoccerTwos
|
chavicoski
| null | 20 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 844 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: chavicoski/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
Rowehn/poca-SoccerTwos-final
|
Rowehn
| null | 20 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 846 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: Rowehn/poca-SoccerTwos-final
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
AnnihilationOperator/ofa-huge-caption
|
AnnihilationOperator
|
ofa
| 6 | 0 |
transformers
| 0 | null | true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,728 |
# OFA-huge-caption
This is the **huge** version of OFA pretrained model finetuned on COCO captioning task, forked & converted from the [original fairseq version](https://ofa-beijing.oss-cn-beijing.aliyuncs.com/checkpoints/caption_huge_best.pt) and compressed into float16.
The conversion script is custom, but the procedure described in [Issue #171](https://github.com/OFA-Sys/OFA/issues/171) should also apply (quantization is not performed, but that's trivial).
You will need an [OFA-modified version of transformers](https://github.com/OFA-Sys/OFA/tree/feature/add_transformers) to use this model. No idea why it is still not in master. Tip: you can just copy-paste the `transformers` folder into your project and rename it, then monkey-patch the `transformers` module to point to your local copy to avoid having to install it.
## Original README below
## Introduction
This is the **huge** version of the OFA pretrained model. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) to a simple sequence-to-sequence learning framework.
The directory includes 4 files, namely `config.json` which consists of model configuration, `vocab.json` and `merge.txt` for our OFA tokenizer, and lastly `pytorch_model.bin` which consists of model weights. There is no need to worry about the mismatch between Fairseq and transformers, since we have already addressed the issue.
## How to use
To use it in transformers, please refer to <https://github.com/OFA-Sys/OFA/tree/feature/add_transformers>. Install the transformers and download the models as shown below.
```bash
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
git clone https://huggingface.co/OFA-Sys/OFA-huge
```
Afterwards, set `ckpt_dir` to the path of OFA-huge, and prepare an image for the testing example below. Also, ensure that you have Pillow and torchvision in your environment.
```python
# torch is needed for the patch_masks tensor used below
>>> import torch
>>> from PIL import Image
>>> from torchvision import transforms
>>> from transformers import OFATokenizer, OFAModel
>>> from generate import sequence_generator
>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 480
>>> patch_resize_transform = transforms.Compose([
lambda image: image.convert("RGB"),
transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std)
])
>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
>>> txt = " what does the image describe?"
>>> inputs = tokenizer([txt], return_tensors="pt").input_ids
>>> img = Image.open(path_to_image)
>>> patch_img = patch_resize_transform(img).unsqueeze(0)
# using the generator of fairseq version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=True)
>>> generator = sequence_generator.SequenceGenerator(
tokenizer=tokenizer,
beam_size=5,
max_len_b=16,
min_len=0,
no_repeat_ngram_size=3,
)
>>> data = {}
>>> data["net_input"] = {"input_ids": inputs, 'patch_images': patch_img, 'patch_masks':torch.tensor([True])}
>>> gen_output = generator.generate([model], data)
>>> gen = [gen_output[i][0]["tokens"] for i in range(len(gen_output))]
# using the generator of huggingface version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=5, no_repeat_ngram_size=3)
>>> print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
|
MPSTME/swin-tiny-patch4-window7-224-finetuned-skin-cancer
|
MPSTME
|
swin
| 10 | 0 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,071 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-skin-cancer
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
PeterBanning71/t5-small-finetuned-eLife
|
PeterBanning71
|
t5
| 14 | 0 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 1,576 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-eLife
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8960
- Rouge1: 14.7239
- Rouge2: 2.8698
- Rougel: 11.0202
- Rougelsum: 13.3642
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
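In the absence of documented usage, here is a minimal inference sketch with the `transformers` pipeline (the article variable is an illustrative placeholder; the generation lengths are arbitrary):
```python
from transformers import pipeline

# Summarization with the fine-tuned T5 checkpoint
summarizer = pipeline(
    "summarization",
    model="PeterBanning71/t5-small-finetuned-eLife",
)

article = "..."  # paste an eLife-style article or abstract here
print(summarizer(article, max_length=64, min_length=10)[0]["summary_text"])
```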
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.3558 | 1.0 | 544 | 2.9587 | 13.7915 | 2.6556 | 10.3265 | 12.5097 | 19.0 |
| 3.1299 | 2.0 | 1088 | 2.9079 | 14.7136 | 2.7492 | 10.836 | 13.3664 | 19.0 |
| 3.0917 | 3.0 | 1632 | 2.8960 | 14.7239 | 2.8698 | 11.0202 | 13.3642 | 19.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Lucetepolis/OctaFuzz
|
Lucetepolis
| null | 12 | 0 |
diffusers
| 3 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
| false | true | true | 4,192 |
# HXDC
Counterfeit-V2.5 - <a href="https://huggingface.co/gsdf/Counterfeit-V2.5">Download</a><br/>
Treebark - <a href="https://huggingface.co/HIZ/aichan_pick">Download</a><br/>
HyperBomb, FaceBomb - <a href="https://huggingface.co/mocker/KaBoom">Download</a><br/>
qwerty - <a href="https://huggingface.co/1q2W3e/qwerty">Download</a><br/>
ultracolor.v4 - <a href="https://huggingface.co/xdive/ultracolor.v4">Download</a><br/>
donko-mix-hard - <a href="https://civitai.com/models/7037/donko-mix-nsfw-hard">Download</a><br/>
OrangePastelV2 - ~~Download~~ Currently not available.<br/>
smix 1.12121 - <a href="https://civitai.com/models/8019/smix-1-series">Download</a><br/>
viewer-mix - <a href="https://civitai.com/models/7813/viewer-mix">Download</a><br/>
0012-half - <a href="https://huggingface.co/1q2W3e/Attached-model_collection">Download</a><br/>
Null v2.2 - <a href="https://civitai.com/models/8173/null-v22">Download</a><br/>
school anime - <a href="https://civitai.com/models/7189/school-anime">Download</a><br/>
tlqkfniji7 - <a href="https://huggingface.co/uiouiouio/The_lovely_quality_kahlua_flavour">Download</a><br/>
7th_anime_v3_B - <a href="https://huggingface.co/syaimu/7th_Layer">Download</a><br/>
Crowbox-Vol.1 - <a href="https://huggingface.co/kf1022/Crowbox-Vol.1">Download</a><br/>
EasyNegative and pastelmix-lora seem to work well with the models.
EasyNegative - <a href="https://huggingface.co/datasets/gsdf/EasyNegative">Download</a><br/>
pastelmix-lora - <a href="https://huggingface.co/andite/pastel-mix">Download</a>
# Formula
```
Counterfeit-V2.5 + Treebark = ct
base_alpha = 0.009901
Weight values = 0.259221, 0.094699, 0.186355, 0.344377, 0.54691, 0.535689, 0.526122, 0.420305, 0.312004, 0.40172, 0.452608, 0.481439, 0.029126, 0.492655, 0.478894, 0.443794, 0.284518, 0.24424, 0.284451, 0.382469, 0.282082, 0.18387, 0.126064, 0.113941, 0.103878
ct + HyperBomb = cth
base_alpha = 0.09009
Weight values = 0.208912, 0.290962, 0.44034, 0.426141, 0.294959, 0.258193, 0.279347, 0.219226, 0.100589, 0.076065, 0.061552, 0.053125, 0.225564, 0.013679, 0.029582, 0.067917, 0.209599, 0.238881, 0.209736, 0.097528, 0.143293, 0.18856, 0.227611, 0.336235, 0.40562
cth + qwerty = cthq
base_alpha = 0.008929
Weight values = 0.298931, 0.286255, 0.185812, 0.136147, 0.100038, 0.09741, 0.069466, 0.065465, 0.099956, 0.218813, 0.27544, 0.304705, 0.184049, 0.021782, 0.051109, 0.115061, 0.291535, 0.319518, 0.291441, 0.197459, 0.295056, 0.359111, 0.375537, 0.264379, 0.170006
cthq + ultracolor.v4 = cthqu
base_alpha = 0.081967
Weight values = 0.044348, 0.051224, 0.092643, 0.0896, 0.047055, 0.03864, 0.032217, 0.034381, 0.032329, 0.017, 0.009525, 0.005618, 0.380228, 0.060561, 0.083015, 0.128444, 0.233262, 0.247876, 0.234218, 0.103302, 0.082694, 0.111921, 0.235504, 0.634374, 0.746614
cthqu + FaceBomb = cthquf
base_alpha = 0.45045
Weight values = 0.304652, 0.108189, 0.113682, 0.116402, 0.118828, 0.11284, 0.095841, 0.065612, 0.035945, 0.033428, 0.032195, 0.03155, 0.03663, 0.006005, 0.008193, 0.012592, 0.022593, 0.023941, 0.02257, 0.019395, 0.027618, 0.032024, 0.029911, 0.015144, 0.010908
cthquf + donko-mix-hard = cthqufd
base_alpha = 0.310559
Weight values = 0.041071, 0.033818, 0.035788, 0.036933, 0.038236, 0.037834, 0.040386, 0.045727, 0.049152, 0.025509, 0.0135, 0.007091, 0.035336, 0.009262, 0.016837, 0.031714, 0.063923, 0.068124, 0.063941, 0.051919, 0.076044, 0.091518, 0.094579, 0.081523, 0.077707
cthqufd + OrangePastelV2 = OctaFuzz
base_alpha = 0.03012
Weight values = 0.045454, 0.044635, 0.071192, 0.078145, 0.074833, 0.072486, 0.069609, 0.08331, 0.082494, 0.043373, 0.022197, 0.010507, 0.03413, 0.009176, 0.016555, 0.030733, 0.06007, 0.063741, 0.059989, 0.049022, 0.069114, 0.078421, 0.07162, 0.029375, 0.016293
smix 1.12121 + viewer-mix = sv
base_alpha = 0.230769
Weight values = 0.395271, 0.35297, 0.359395, 0.382984, 0.448508, 0.468333, 0.478042, 0.475167, 0.419157, 0.446681, 0.469808, 0.48688, 0.230769, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5
sv + 0012-half = sv0
base_alpha = 0.434783
Weight values = 0.096641, 0.097719, 0.100011, 0.105301, 0.118931, 0.122252, 0.120899, 0.11391, 0.15397, 0.407393, 0.526559, 0.587752, 0.071429, 0.326817, 0.315594, 0.291682, 0.229445, 0.220024, 0.229364, 0.30164, 0.31157, 0.309196, 0.281226, 0.145209, 0.089865
sv0 + Null v2.2 = sv0n
base_alpha = 0.115385
Weight values = 0.132862, 0.1371, 0.108727, 0.104247, 0.117468, 0.122796, 0.131157, 0.14836, 0.213205, 0.184383, 0.170088, 0.16255, 0.176471, 0.013049, 0.029363, 0.062385, 0.138653, 0.149139, 0.138776, 0.119286, 0.183455, 0.228237, 0.255516, 0.296091, 0.311362
sv0n + school anime = sv0ns
base_alpha = 0.103448
Weight values = 0.087455, 0.088646, 0.114848, 0.110151, 0.070954, 0.064852, 0.054146, 0.06643, 0.083591, 0.111871, 0.125259, 0.132157, 0.055556, 0.014513, 0.032747, 0.067662, 0.139412, 0.148332, 0.139177, 0.054834, 0.040531, 0.031203, 0.02771, 0.029855, 0.03066
sv0ns + tlqkfniji7 = sv0nst
base_alpha = 0.25641
Weight values = 0.366264, 0.082457, 0.061703, 0.0743, 0.128699, 0.132356, 0.090334, 0.073644, 0.120288, 0.066093, 0.038035, 0.022911, 0.016393, 0.010271, 0.010979, 0.012331, 0.015099, 0.015235, 0.014313, 0.006851, 0.005245, 0.005269, 0.008194, 0.021708, 0.026685
sv0nst + 7th_anime_v3_B = sv0nst7
base_alpha = 0.025
Weight values = 0.270768, 0.082819, 0.089464, 0.099695, 0.122101, 0.11876, 0.079592, 0.057662, 0.096981, 0.056373, 0.033881, 0.021306, 0.016129, 0.004163, 0.005616, 0.008379, 0.013987, 0.01468, 0.013977, 0.00666, 0.004674, 0.003356, 0.002823, 0.002944, 0.002989
sv0nst7 + Crowbox-Vol.1 = OctaBlend
base_alpha = 0.007444
Weight values = 0.036592, 0.028764, 0.033246, 0.051828, 0.096045, 0.099435, 0.054162, 0.020355, 0.01281, 0.027376, 0.035261, 0.039613, 0.005348, 0.029654, 0.026405, 0.020164, 0.00725, 0.005724, 0.007621, 0.016328, 0.014867, 0.025298, 0.058555, 0.172774, 0.208144
OctaFuzz + OctaBlend = HXDC
base_alpha = 0.5
Weight values = 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5
```
# Converted weights



# Samples
All of the images use following negatives/settings. EXIF preserved.
```
Negative prompt: (worst quality, low quality:1.4), EasyNegative, bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits
Steps: 28, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 768x512, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires upscale: 1.5, Hires steps: 14, Hires upscaler: Latent (nearest-exact)
```
# OctaFuzz








# OctaBlend








# HXDC








|
Beegbrain/a2c-AntBulletEnv-v0-2
|
Beegbrain
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
frankenstyle/q-FrozenLake-v1-4x4-noSlippery
|
frankenstyle
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 401 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="frankenstyle/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
lmqg/flan-t5-base-squad-qag
|
lmqg
|
t5
| 13 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['en']
|
['lmqg/qag_squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['questions and answers generation']
| true | true | true | 3,881 |
# Model Card of `lmqg/flan-t5-base-squad-qag`
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) for the question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/flan-t5-base](https://huggingface.co/google/flan-t5-base)
- **Language:** en
- **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/flan-t5-base-squad-qag")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/flan-t5-base-squad-qag")
output = pipe("generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/flan-t5-base-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 93.04 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedF1Score (MoverScore) | 65.24 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedPrecision (BERTScore) | 93.1 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedPrecision (MoverScore) | 65.91 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedRecall (BERTScore) | 92.99 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedRecall (MoverScore) | 64.7 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_squad
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: ['qag']
- model: google/flan-t5-base
- max_length: 512
- max_length_output: 256
- epoch: 14
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.0
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/flan-t5-base-squad-qag/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
johannes5117/kadoa-page-extraction
|
johannes5117
|
t5
| 12 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,354 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kadoa-page-extraction
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 0.8235 |
| No log | 2.0 | 2 | 0.8235 |
| No log | 3.0 | 3 | 0.8235 |
| No log | 4.0 | 4 | 0.8235 |
| No log | 5.0 | 5 | 0.8235 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
frankenstyle/q-taxi-v3
|
frankenstyle
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 368 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="frankenstyle/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Brainergy/ppiittuuffoo
|
Brainergy
| null | 16 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 423 |
### ppiittuuffoo Dreambooth model trained by Brainergy with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
sunilSabnis/vit-mae-large-ai-or-not
|
sunilSabnis
|
vit
| 11 | 0 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,427 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-mae-large-ai-or-not
This model is a fine-tuned version of [facebook/vit-mae-large](https://huggingface.co/facebook/vit-mae-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1883
- Accuracy: 0.9683
## Model description
More information needed
## Intended uses & limitations
More information needed
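In the absence of documented usage, here is a minimal inference sketch with the `transformers` pipeline (the image path is an illustrative placeholder; this assumes the checkpoint exposes a standard image-classification head):
```python
from transformers import pipeline

# "AI-generated or not" image classification
clf = pipeline(
    "image-classification",
    model="sunilSabnis/vit-mae-large-ai-or-not",
)

print(clf("example.jpg"))  # returns labels with confidence scores
```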
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3623 | 0.19 | 200 | 0.2099 | 0.9243 |
| 0.2465 | 0.38 | 400 | 0.4055 | 0.8545 |
| 0.2164 | 0.57 | 600 | 0.1808 | 0.9259 |
| 0.1943 | 0.76 | 800 | 0.1765 | 0.9329 |
| 0.1723 | 0.95 | 1000 | 0.2083 | 0.9313 |
| 0.118 | 1.15 | 1200 | 0.2295 | 0.9168 |
| 0.0812 | 1.34 | 1400 | 0.1600 | 0.9511 |
| 0.082 | 1.53 | 1600 | 0.1331 | 0.9624 |
| 0.0863 | 1.72 | 1800 | 0.1352 | 0.9511 |
| 0.0858 | 1.91 | 2000 | 0.1643 | 0.9506 |
| 0.056 | 2.1 | 2200 | 0.1930 | 0.9586 |
| 0.0319 | 2.29 | 2400 | 0.1595 | 0.9624 |
| 0.0206 | 2.48 | 2600 | 0.2937 | 0.9447 |
| 0.0299 | 2.67 | 2800 | 0.1680 | 0.9603 |
| 0.0213 | 2.86 | 3000 | 0.1746 | 0.9586 |
| 0.0164 | 3.05 | 3200 | 0.1579 | 0.9624 |
| 0.0019 | 3.24 | 3400 | 0.1787 | 0.9646 |
| 0.0022 | 3.44 | 3600 | 0.1976 | 0.9640 |
| 0.0023 | 3.63 | 3800 | 0.2017 | 0.9651 |
| 0.0045 | 3.82 | 4000 | 0.1883 | 0.9683 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
TieIncred/ddpm-celebahq-finetuned-butterflies-2epochs
|
TieIncred
| null | 6 | 0 |
diffusers
| 0 |
unconditional-image-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
| false | true | true | 347 |
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class ๐งจ](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('TieIncred/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
pmgautam/ppo-LunarLander-v2
|
pmgautam
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
spacemanidol/flan-t5-small-xsum
|
spacemanidol
|
t5
| 14 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['xsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,148 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0998
- Rouge1: 33.2675
- Rouge2: 11.0862
- Rougel: 26.1709
- Rougelsum: 26.1668
- Gen Len: 28.0123
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.12.1
|
dmitry-np/a2c-AntBulletEnv-v0
|
dmitry-np
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
nikogarro/DQN-SpaceInvadersNoFrameskip-v4
|
nikogarro
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,219 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nikogarro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nikogarro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga nikogarro
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 160000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.05),
('exploration_fraction', 0.2),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0005),
('learning_starts', 50000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 2000),
('train_freq', 4),
('normalize', False)])
```
|
mshibatatt/Reinforce-CartPole-v1
|
mshibatatt
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JessicaHsu/q-FrozenLake-v1-4x4-noSlippery
|
JessicaHsu
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 399 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="JessicaHsu/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
JessicaHsu/qTaxi-v3
|
JessicaHsu
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 365 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="JessicaHsu/qTaxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
augustocsc/gpt-m0
|
augustocsc
|
gpt2
| 7 | 0 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,103 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-m0
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0036
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7384 | 0.61 | 500 | 1.6251 |
| 0.0325 | 1.22 | 1000 | 0.0146 |
| 0.0104 | 1.83 | 1500 | 0.0094 |
| 0.008 | 2.44 | 2000 | 0.0074 |
| 0.0061 | 3.05 | 2500 | 0.0058 |
| 0.0057 | 3.66 | 3000 | 0.0050 |
| 0.0059 | 4.27 | 3500 | 0.0050 |
| 0.0047 | 4.88 | 4000 | 0.0050 |
| 0.0043 | 5.49 | 4500 | 0.0045 |
| 0.0043 | 6.11 | 5000 | 0.0045 |
| 0.0036 | 6.72 | 5500 | 0.0043 |
| 0.0038 | 7.33 | 6000 | 0.0041 |
| 0.0034 | 7.94 | 6500 | 0.0044 |
| 0.0036 | 8.55 | 7000 | 0.0040 |
| 0.0032 | 9.16 | 7500 | 0.0039 |
| 0.0033 | 9.77 | 8000 | 0.0037 |
| 0.0032 | 10.38 | 8500 | 0.0036 |
| 0.0029 | 10.99 | 9000 | 0.0035 |
| 0.003 | 11.6 | 9500 | 0.0035 |
| 0.0027 | 12.21 | 10000 | 0.0036 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
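## Inference example (sketch)
The card does not include an inference snippet; a minimal sketch with the `transformers` pipeline, assuming the fine-tuned checkpoint is published under `augustocsc/gpt-m0` with the standard GPT-2 tokenizer:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="augustocsc/gpt-m0")
# Adjust max_new_tokens and sampling settings to taste
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```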
|
akghxhs55/poca-SoccerTwos-2
|
akghxhs55
| null | 30 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 845 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: akghxhs55/poca-SoccerTwos-2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
franfram/distillbert-base-spanish-uncased-finetuned-spanish-corpus
|
franfram
|
distilbert
| 15 | 0 |
transformers
| 0 |
fill-mask
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,369 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-base-spanish-uncased-finetuned-spanish-corpus
This model is a fine-tuned version of [CenIA/distillbert-base-spanish-uncased](https://huggingface.co/CenIA/distillbert-base-spanish-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3776 | 1.0 | 56 | 3.7218 |
| 3.3814 | 2.0 | 112 | 3.7694 |
| 3.3348 | 3.0 | 168 | 3.7389 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
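## Inference example (sketch)
The card omits a usage snippet; a minimal fill-mask sketch, assuming the checkpoint is published under the repo id below and keeps the base model's `[MASK]` token:
```python
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="franfram/distillbert-base-spanish-uncased-finetuned-spanish-corpus",
)
# Top predictions for the masked Spanish sentence
print(unmasker("Hoy hace muy buen [MASK]."))
```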
|
fathyshalab/domain_transfer_clinic_credit_cards-massive_transport-roberta-large-v1-2-5
|
fathyshalab
|
roberta
| 14 | 0 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,538 |
# fathyshalab/domain_transfer_clinic_credit_cards-massive_transport-roberta-large-v1-2-5
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_transport-roberta-large-v1-2-5")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst ๐คฎ"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
kasrahabib/20_propogated
|
kasrahabib
|
bert
| 10 | 0 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,915 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kasrahabib/20_propogated
This model is a fine-tuned version of [kasrahabib/XXX08_02_23__-bucket-finetunned](https://huggingface.co/kasrahabib/XXX08_02_23__-bucket-finetunned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0504
- Validation Loss: 0.1528
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7660, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2492 | 0.1740 | 0 |
| 0.1527 | 0.1501 | 1 |
| 0.1092 | 0.1582 | 2 |
| 0.0879 | 0.1568 | 3 |
| 0.0774 | 0.1577 | 4 |
| 0.0689 | 0.1513 | 5 |
| 0.0597 | 0.1598 | 6 |
| 0.0600 | 0.1536 | 7 |
| 0.0526 | 0.1519 | 8 |
| 0.0504 | 0.1528 | 9 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
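## Inference example (sketch)
Since the card lists no usage code, here is a minimal TensorFlow inference sketch. It assumes the fine-tuned weights and tokenizer are pushed under `kasrahabib/20_propogated` and makes no claim about what the output labels mean:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo_id = "kasrahabib/20_propogated"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForSequenceClassification.from_pretrained(repo_id)

# Example input sentence; replace with your own text
inputs = tokenizer("The system shall respond within two seconds.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```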
|
AntiSquid/DQN-SpaceInvadersNoFrameskip-v4
|
AntiSquid
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,221 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AntiSquid -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AntiSquid -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AntiSquid
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
fathyshalab/domain_transfer_clinic_credit_cards-massive_calendar-roberta-large-v1-2-88
|
fathyshalab
|
roberta
| 14 | 0 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,538 |
# fathyshalab/domain_transfer_clinic_credit_cards-massive_calendar-roberta-large-v1-2-88
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_calendar-roberta-large-v1-2-88")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst ๐คฎ"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
JessicaHsu/q-Taxi-v3-1
|
JessicaHsu
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 368 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="JessicaHsu/q-Taxi-v3-1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
fathyshalab/domain_transfer_clinic_credit_cards-massive_play-roberta-large-v1-2-65
|
fathyshalab
|
roberta
| 14 | 0 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,530 |
# fathyshalab/domain_transfer_clinic_credit_cards-massive_play-roberta-large-v1-2-65
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_play-roberta-large-v1-2-65")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst ๐คฎ"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Cortes48/beachdreamwongkarwai
|
Cortes48
| null | 19 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 430 |
### BeachDreamWongKarWai Dreambooth model trained by Cortes48 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
fathyshalab/domain_transfer_clinic_credit_cards-massive_datetime-roberta-large-v1-2-95
|
fathyshalab
|
roberta
| 14 | 0 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,538 |
# fathyshalab/domain_transfer_clinic_credit_cards-massive_datetime-roberta-large-v1-2-95
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_datetime-roberta-large-v1-2-95")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst ๐คฎ"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
jinhu2659/Taxi-v3
|
jinhu2659
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 379 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="jinhu2659/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mibalaguer/ppo-Huggy
|
mibalaguer
| null | 32 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 821 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: mibalaguer/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
Beegbrain/a2c-PandaReachDense-v2
|
Beegbrain
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
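Until the TODO above is filled in, here is a hedged sketch of loading the checkpoint with `huggingface_sb3`; the filename is an assumption, so adjust it to the actual file in the repo:
```python
import gym
import panda_gym  # registers PandaReachDense-v2
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="Beegbrain/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(1_000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```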
|
YoriV/ppo-SnowballTarget
|
YoriV
| null | 20 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 852 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: YoriV/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
xiaofxiong/ppo-Huggy
|
xiaofxiong
| null | 32 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 821 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: xiaofxiong/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
fathyshalab/domain_transfer_clinic_credit_cards-massive_recommendation-roberta-large-v1-2-15
|
fathyshalab
|
roberta
| 14 | 0 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,550 |
# fathyshalab/domain_transfer_clinic_credit_cards-massive_recommendation-roberta-large-v1-2-15
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_recommendation-roberta-large-v1-2-15")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst ๐คฎ"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Achitha/small_data_test
|
Achitha
|
whisper
| 14 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,047 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small_data_test
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
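## Inference example (sketch)
The card does not show how to run the model; a minimal transcription sketch with the `transformers` pipeline, assuming the fine-tuned weights are published under `Achitha/small_data_test`:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Achitha/small_data_test")
# Replace audio.wav with the path to your recording
print(asr("audio.wav")["text"])
```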
|
waifuwishes/WW_LoRAs
|
waifuwishes
| null | 7 | 0 | null | 0 |
text-to-image
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image', 'lora', 'anime']
| false | true | true | 3,508 |
# Table of Contents
- [Overview](#overview)
- [Installation](#installation)
- [Usage](#usage)
- [LoRAs](#loras)
- [SocialMedia](#socialmedia)
# Overview
Inspired by the amazing work done by [Trauter](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs), I decided to make a contribution
to society by extending his work and developing new LoRAs.
I'm going to train and test models on anime checkpoints like [WarriorMama777](https://huggingface.co/WarriorMama777/OrangeMixs), [Andite](https://huggingface.co/andite/anything-v4.0), and
[Gsdf](https://huggingface.co/gsdf/Counterfeit-V2.5); for that reason, I don't know how they will perform on your specific model.
You can find a comparison grid in the **[model_name]/Previews** folder.
Previews have metadata containing the prompt and settings used to create them; you can access it via the "PNG Info" tab in [Automatic1111/WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui).
Every model is trained with [danbooru](https://danbooru.donmai.us/tags?commit=Search&search%5Bhide_empty%5D=yes&search%5Border%5D=count) tags, using [wd14-tagger](https://github.com/toriato/stable-diffusion-webui-wd14-tagger) with minor tweaks.
Additionally, every character folder contains a json file with information about [training](https://github.com/bmaltais/kohya_ss) settings used for a specific model.
As far as I can tell, there is no reason for training a model for more than 2 epochs (4000+ steps).
# Installation
Paste the desired model (and, if you want a thumbnail, the preview image) into **\stable-diffusion-webui\models\Lora**
Since LoRAs are now available directly in WebUI, you can use them as presented in the following [guide](https://rentry.org/2chAI_LoRA_Dreambooth_guide_english#usage).
# Usage
I name models with the **ww** prefix:
```
ww_[source_name]_[character_name]
ww_ov_widowmaker
```
The suffix in a model's name indicates the number of steps it was trained for.
More steps mean more training, so the model is more likely to produce images close to the original source, but I find the differences to be very subtle in most cases.
I wanted to create fairly flexible models. You can experiment with fewer tags by setting the LoRA weight to 1, or you may want to customize specific parts like hair type or length, clothes, breast size, or accessories with a lower weight: 0.6 - 0.7.
# LoRAs
- [Overwatch](#overwatch)
- [Widowmaker](#widowmaker)
- [Ashe](#ashe)
# Overwatch
- # Widowmaker
[<img src="https://huggingface.co/waifuwishes/WW_LoRAs/resolve/main/Overwatch/Widowmaker/Previews/ww_ov_widowmaker_v1_1700.png" width="512" height="768">](https://huggingface.co/waifuwishes/WW_LoRAs/resolve/main/Overwatch/Widowmaker/Previews/ww_ov_widowmaker_v1_1700.png)
<details>
<summary>Prompt</summary>
<pre>
ww_ov_widowmaker, (masterpiece:1.2), (best quality), (extremely detailed), highres, illustration, depth of field, dark intense shadows, sharp focus, soft light, (good composition), standing,
1girl, solo, small breasts, pink bodysuit, arm tattoo, center opening, headgear, colored skin, earrings, gloves, pauldrons, lips, long hair, makeup, nose, ponytail, purple hair, purple lips, purple skin, short sleeves, yellow eyes, looking at viewer, <lora:ww_ov_widowmaker_v1_1700:0.7>,
outdoors, night, detailed background
Negative prompt: EasyNegative, extra fingers,fewer fingers, username, artist name, signature, disembodied limb, extra legs, extra arms, extra fingers, bad anatomy, username, signature
Steps: 50, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 357744401, Size: 512x768, Model hash: 038ba203d8, Denoising strength: 0.5, Clip skip: 2, Hires upscale: 1.3, Hires upscaler: Latent, aesthetic_score: 6.7
</pre>
</details>
- # Ashe
[<img src="https://huggingface.co/waifuwishes/WW_LoRAs/resolve/main/Overwatch/Ashe/Previews/ww_ov_ashe_v1_1600.png" width="512" height="768">](https://huggingface.co/waifuwishes/WW_LoRAs/resolve/main/Overwatch/Ashe/Previews/ww_ov_ashe_v1_1600.png)
<details>
<summary>Prompt</summary>
<pre>
ww_ov_ashe, (masterpiece:1.2), (best quality), (extremely detailed), highres, illustration, depth of field, dark intense shadows, sharp focus, soft light, (good composition), standing,
1girl, solo, asymmetrical hair, bob cut, white hair, medium hair, cowboy hat, earrings, shoulder armor, eyeshadow, (red eyes:1.2), cowboy hat, jewelry, lipstick, makeup, mole above mouth, necktie, nose, red lips, red necktie, white shirt, vest, looking at viewer, <lora:ww_ov_ashe_v1_1600:0.7>,
outdoors, sunset, detailed background
Negative prompt: EasyNegative, extra fingers,fewer fingers, username, artist name, signature, disembodied limb, extra legs, extra arms, extra fingers, bad anatomy, username, signature
Steps: 50, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 4022578532, Size: 512x768, Model hash: 0873291ac5, Denoising strength: 0.5, Clip skip: 2, Hires upscale: 1.3, Hires upscaler: Latent, aesthetic_score: 6.8
</pre>
</details>
# SocialMedia
[Twitter](https://twitter.com/Waifu_Wishes)
[Reddit](https://www.reddit.com/user/waifu_wishes)
[Instagram](https://www.instagram.com/waifuwishes/)
|