| repo_id<br/>string (4-122) | author<br/>string (2-38) ⌀ | model_type<br/>string (2-33) ⌀ | files_per_repo<br/>int64 (2-39k) | downloads_30d<br/>int64 (0-33.7M) | library<br/>string (2-37) ⌀ | likes<br/>int64 (0-4.87k) | pipeline<br/>string (5-30) ⌀ | pytorch<br/>bool (2 classes) | tensorflow<br/>bool (2 classes) | jax<br/>bool (2 classes) | license<br/>string (2-33) ⌀ | languages<br/>string (2-1.63k) ⌀ | datasets<br/>string (2-2.58k) ⌀ | co2<br/>string (6-258) ⌀ | prs_count<br/>int64 (0-125) | prs_open<br/>int64 (0-120) | prs_merged<br/>int64 (0-46) | prs_closed<br/>int64 (0-34) | discussions_count<br/>int64 (0-218) | discussions_open<br/>int64 (0-148) | discussions_closed<br/>int64 (0-70) | tags<br/>string (2-513) | has_model_index<br/>bool (2 classes) | has_metadata<br/>bool (2 classes) | has_text<br/>bool (1 class) | text_length<br/>int64 (201-598k) | readme<br/>string (0-598k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| davanstrien/dataset_mentions | davanstrien | mpnet | 13 | 13 | sentence-transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | ['setfit', 'sentence-transformers', 'text-classification'] | false | true | true | 1,422 |
# davanstrien/dataset_mentions
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
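For context, a minimal training sketch of this procedure with the SetFit API (the dataset, base model, and `SetFitTrainer` signature are illustrative assumptions; the API differs across `setfit` versions):
```python
from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer

# Placeholder dataset and base model -- substitute your own few-shot data.
dataset = load_dataset("sst2")
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=dataset["train"].select(range(64)),  # few-shot: a handful of labeled examples
    eval_dataset=dataset["validation"],
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()
metrics = trainer.evaluate()
```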
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("davanstrien/dataset_mentions")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| ahmad1289/ppo-SnowballTarget | ahmad1289 | null | 30 | 2 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget'] | false | true | true | 856 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: ahmad1289/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| fathyshalab/massive_transport-roberta-large-v1-2-0.15 | fathyshalab | roberta | 14 | 2 | sentence-transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['setfit', 'sentence-transformers', 'text-classification'] | false | true | true | 1,472 |
# fathyshalab/massive_transport-roberta-large-v1-2-0.15
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_transport-roberta-large-v1-2-0.15")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| apatidar0/conversation-summ | apatidar0 | bart | 14 | 3 | transformers | 0 | text2text-generation | true | false | false | mit | null | ['samsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,711 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conversation-summ
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4048
- Rouge1: 51.7796
- Rouge2: 26.1341
- Rougel: 41.4013
- Rougelsum: 41.4563
- Gen Len: 29.656
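For quick experimentation, a minimal inference sketch using the `transformers` pipeline API (the sample dialogue is illustrative, not from the card):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="apatidar0/conversation-summ")
dialogue = """Amanda: I baked cookies. Do you want some?
Jerry: Sure!
Amanda: I'll bring you some tomorrow :-)"""
print(summarizer(dialogue)[0]["summary_text"])
```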
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.5781 | 1.0 | 500 | 0.3637 | 50.8871 | 26.6178 | 41.8757 | 41.9291 | 25.16 |
| 0.2183 | 2.0 | 1000 | 0.3586 | 50.7919 | 25.4277 | 40.8428 | 40.8421 | 27.712 |
| 0.1354 | 3.0 | 1500 | 0.4048 | 51.7796 | 26.1341 | 41.4013 | 41.4563 | 29.656 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| lucataco/startuplogos-lora-small | lucataco | null | 15 | 0 | diffusers | 1 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora'] | false | true | true | 390 |
# LoRA text2image fine-tuning - https://huggingface.co/lucataco/startuplogos-lora-supersmall
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the lucataco/startuplogo-captions dataset. Some example images are shown below.




| Mustafa21/detr-resnet-50_finetuned_cppe5 | Mustafa21 | detr | 15 | 2 | transformers | 0 | object-detection | true | false | false | apache-2.0 | null | ['cppe-5'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 976 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
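As a usage sketch (not part of the generated card), inference via the generic `transformers` object-detection classes; the image path is a placeholder:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

processor = AutoImageProcessor.from_pretrained("Mustafa21/detr-resnet-50_finetuned_cppe5")
model = AutoModelForObjectDetection.from_pretrained("Mustafa21/detr-resnet-50_finetuned_cppe5")

image = Image.open("example.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a 0.5 confidence threshold
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label in zip(results["scores"], results["labels"]):
    print(model.config.id2label[label.item()], round(score.item(), 3))
```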
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
| fathyshalab/massive_calendar-roberta-large-v1-2-0.89 | fathyshalab | roberta | 14 | 2 | sentence-transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['setfit', 'sentence-transformers', 'text-classification'] | false | true | true | 1,470 |
# fathyshalab/massive_calendar-roberta-large-v1-2-0.89
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_calendar-roberta-large-v1-2-0.89")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| Elytum/my_awesome_model | Elytum | distilbert | 10 | 19 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,275 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3322
- Accuracy: 0.9279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.3177 | 1.0 | 152679 | 0.3123 | 0.9248 |
| 0.2212 | 2.0 | 305358 | 0.3322 | 0.9279 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
| fathyshalab/massive_play-roberta-large-v1-2-0.64 | fathyshalab | roberta | 14 | 2 | sentence-transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['setfit', 'sentence-transformers', 'text-classification'] | false | true | true | 1,462 |
# fathyshalab/massive_play-roberta-large-v1-2-0.64
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_play-roberta-large-v1-2-0.64")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| dasaprakashk/Reinforce-Pixelcopter-PLE-v0 | dasaprakashk | null | 6 | 0 | null | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class'] | true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| vvn0/a2c-AntBulletEnv-v0 | vvn0 | null | 13 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is an assumption -- check the repo's file list for the actual zip name.
checkpoint = load_from_hub(repo_id="vvn0/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
| ahmad1289/pyramids-RND-1 | ahmad1289 | null | 16 | 2 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids'] | false | true | true | 834 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: ahmad1289/pyramids-RND-1
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| victorivus/q-FrozenLake-v1-4x4-noSlippery | victorivus | null | 5 | 0 | null | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 399 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebooks.
model = load_from_hub(repo_id="victorivus/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
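As an illustration, a greedy rollout with the downloaded checkpoint; the `qtable` key and the old 4-tuple `gym` step API are assumptions based on the course's push-to-hub helper:
```python
import numpy as np

state = env.reset()  # older gym API; gymnasium returns (state, info)
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, done, _ = env.step(action)        # older 4-tuple step API
    total_reward += reward
print("episode return:", total_reward)
```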
| pabloac31/ppo-Pyramids | pabloac31 | null | 18 | 1 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids'] | false | true | true | 832 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: pabloac31/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| victorivus/Q-Learner-Taxi-v3 | victorivus | null | 5 | 0 | null | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 374 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebooks.
model = load_from_hub(repo_id="victorivus/Q-Learner-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
| JosephSf7/en_pipeline | JosephSf7 | null | 25 | 2 | spacy | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['spacy', 'token-classification', 'text-classification'] | false | true | true | 6,178 |
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.2.3,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `ner`, `textcat` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `ner`, `textcat` |
| **Vectors** | 684830 keys, 684830 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme
<details>
<summary>View label scheme (197 labels for 5 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, `_SP`, ```` |
| **`morphologizer`** | `Definite=Def\|POS=DET\|PronType=Art`, `Number=Sing\|POS=NOUN`, `POS=ADP`, `Degree=Pos\|POS=ADJ`, `Number=Plur\|POS=NOUN`, `Number=Sing\|POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Quot`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `Aspect=Prog\|POS=VERB\|Tense=Pres\|VerbForm=Part`, `POS=ADV`, `POS=VERB\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=PART`, `POS=PUNCT\|PunctType=Comm`, `POS=PRON`, `POS=SCONJ`, `POS=VERB\|VerbForm=Inf`, `POS=PUNCT\|PunctType=Peri`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Degree=Cmp\|POS=ADJ`, `ConjType=Cmp\|POS=CCONJ`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=SPACE`, `Definite=Ind\|POS=DET\|PronType=Art`, `Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `POS=PRON\|PronType=Rel`, `Degree=Sup\|POS=ADJ`, `POS=VERB\|Tense=Pres\|VerbForm=Fin`, `POS=AUX\|VerbForm=Fin`, `POS=AUX\|VerbForm=Inf`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `NumType=Card\|POS=NUM`, `Number=Plur\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Degree=Sup\|POS=ADV`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Quot`, `POS=DET`, `Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `POS=PUNCT\|PunctType=Dash`, `Degree=Cmp\|POS=ADV`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Dem`, `POS=AUX\|VerbForm=Ger`, `POS=AUX`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Foreign=Yes\|POS=X`, `POS=ADV\|PronType=Dem`, `POS=PART\|Polarity=Neg`, `Number=Plur\|POS=PRON\|PronType=Dem`, `POS=AUX\|Tense=Past\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|POS=ADV`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=SYM`, `Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `POS=PUNCT`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=AUX\|VerbType=Mod`, `POS=DET\|Poss=Yes`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=INTJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `NumType=Mult\|POS=ADV`, `POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=AUX`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|POS=PRON\|PronType=Art`, `Aspect=Prog\|POS=AUX\|Tense=Pres\|VerbForm=Part`, `POS=X`, `Case=Acc\|POS=PRON\|Person=2\|PronType=Prs` |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `dep`, `det`, `dobj`, `expl`, `mark`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `pcomp`, `pobj`, `poss`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
| **`ner`** | `CARDINAL`, `DATE`, `FAC`, `GPE`, `LANGUAGE`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |
| **`textcat`** | `Blaming_Geopolitics`, `Blaming_Government`, `Blaming_Migrants`, `No_Frustration`, `Uses_Infrastructure` |
</details>
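A hedged loading sketch; the wheel filename in the install comment is an assumption (check the repo's file list):
```python
# Install first, e.g.:
#   pip install "en_pipeline @ https://huggingface.co/JosephSf7/en_pipeline/resolve/main/en_pipeline-any-py3-none-any.whl"
import spacy

nlp = spacy.load("en_pipeline")
doc = nlp("The water shortage was blamed on the government.")
print(doc.cats)                                      # textcat scores for the five labels above
print([(ent.text, ent.label_) for ent in doc.ents])  # named entities
```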
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 99.94 |
| `POS_ACC` | 99.94 |
| `MORPH_ACC` | 99.93 |
| `DEP_UAS` | 99.80 |
| `DEP_LAS` | 98.90 |
| `SENTS_P` | 100.00 |
| `SENTS_R` | 100.00 |
| `SENTS_F` | 100.00 |
| `ENTS_F` | 99.71 |
| `ENTS_P` | 99.71 |
| `ENTS_R` | 99.71 |
| `CATS_SCORE` | 99.43 |
| `CATS_MICRO_P` | 99.43 |
| `CATS_MICRO_R` | 99.43 |
| `CATS_MICRO_F` | 99.43 |
| `CATS_MACRO_P` | 99.43 |
| `CATS_MACRO_R` | 99.43 |
| `CATS_MACRO_F` | 99.43 |
| `CATS_MACRO_AUC` | 100.00 |
| `CATS_MACRO_AUC_PER_TYPE` | 0.00 |
| `TOK2VEC_LOSS` | 441813.62 |
| `TAGGER_LOSS` | 3246.21 |
| `MORPHOLOGIZER_LOSS` | 3554.80 |
| `PARSER_LOSS` | 333496.66 |
| `NER_LOSS` | 6933.11 |
| `TEXTCAT_LOSS` | 1.50 |
| Elifr/clasificador-sentimientos-pln-uned | Elifr | electra | 10 | 4 | transformers | 0 | text-classification | true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['classification', 'generated_from_trainer'] | true | true | true | 1,379 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-sentimientos-pln-uned
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3848
- Accuracy: 0.4297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3848 | 0.3806 |
| 1.4224 | 2.0 | 776 | 1.2911 | 0.4090 |
| 1.0722 | 3.0 | 1164 | 1.3848 | 0.4297 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| yonas/stt_rw_sw_lg_conformer_ctc_large | yonas | null | 3 | 6 | nemo | 0 | automatic-speech-recognition | true | false | false | cc-by-4.0 | ['rw'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'speech', 'ASR', 'Kinyarwanda', 'Swahili', 'Luganda', 'Multilingual', 'audio', 'CTC', 'Conformer', 'Transformer', 'NeMo', 'pytorch'] | true | true | true | 2,167 |
## Model Overview
<DESCRIBE IN ONE LINE THE MODEL AND ITS USE>
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```bash
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.ASRModel.from_pretrained("yonas/stt_rw_sw_lg_conformer_ctc_large")
```
### Transcribing using Python
First, let's get a sample:
```bash
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```python
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="yonas/stt_rw_sw_lg_conformer_ctc_large" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16,000 Hz (16 kHz) mono-channel audio (wav files) as input.
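If your recordings have a different sample rate or channel count, one way to convert them first (a sketch; `librosa` and `soundfile` are general-purpose assumptions, not NeMo requirements):
```python
import librosa
import soundfile as sf

# Resample to 16 kHz mono before passing the file to asr_model.transcribe()
audio, sr = librosa.load("input.wav", sr=16000, mono=True)
sf.write("input_16k.wav", audio, sr)
```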
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
<ADD SOME INFORMATION ABOUT THE ARCHITECTURE>
## Training
<ADD INFORMATION ABOUT HOW THE MODEL WAS TRAINED - HOW MANY EPOCHS, AMOUNT OF COMPUTE ETC>
### Datasets
<LIST THE NAME AND SPLITS OF DATASETS USED TO TRAIN THIS MODEL (ALONG WITH LANGUAGE AND ANY ADDITIONAL INFORMATION)>
## Performance
<LIST THE SCORES OF THE MODEL - OR USE THE Hugging Face Evaluate Library TO UPLOAD METRICS>
## Limitations
<DECLARE ANY POTENTIAL LIMITATIONS OF THE MODEL>
E.g., since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse on accented speech.
## References
<ADD ANY REFERENCES HERE AS NEEDED>
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
| YoriV/Reinforce-CartPole-v1 | YoriV | null | 6 | 0 | null | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class'] | true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| zmaro/zmaroavatar | zmaro | null | 16 | 46 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 418 |
### zmaroavatar Dreambooth model trained by zmaro with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via the A1111 Colab notebook: [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| fathyshalab/massive_datetime-roberta-large-v1-2-0.82 | fathyshalab | roberta | 14 | 2 | sentence-transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['setfit', 'sentence-transformers', 'text-classification'] | false | true | true | 1,470 |
# fathyshalab/massive_datetime-roberta-large-v1-2-0.82
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_datetime-roberta-large-v1-2-0.82")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| Izc/stt_rw_sw_lg_conformer_ctc_large | Izc | null | 3 | 1 | nemo | 0 | automatic-speech-recognition | true | false | false | cc-by-4.0 | ['rw'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'speech', 'ASR', 'Kinyarwanda', 'Swahili', 'Luganda', 'Multilingual', 'audio', 'CTC', 'Conformer', 'Transformer', 'NeMo', 'pytorch'] | true | true | true | 2,163 |
## Model Overview
<DESCRIBE IN ONE LINE THE MODEL AND ITS USE>
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```bash
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.ASRModel.from_pretrained("Izc/stt_rw_sw_lg_conformer_ctc_large")
```
### Transcribing using Python
First, let's get a sample:
```bash
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```python
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="Izc/stt_rw_sw_lg_conformer_ctc_large" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16,000 Hz (16 kHz) mono-channel audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
<ADD SOME INFORMATION ABOUT THE ARCHITECTURE>
## Training
<ADD INFORMATION ABOUT HOW THE MODEL WAS TRAINED - HOW MANY EPOCHS, AMOUNT OF COMPUTE ETC>
### Datasets
<LIST THE NAME AND SPLITS OF DATASETS USED TO TRAIN THIS MODEL (ALONG WITH LANGUAGE AND ANY ADDITIONAL INFORMATION)>
## Performance
<LIST THE SCORES OF THE MODEL - OR USE THE Hugging Face Evaluate Library TO UPLOAD METRICS>
## Limitations
<DECLARE ANY POTENTIAL LIMITATIONS OF THE MODEL>
E.g., since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse on accented speech.
## References
<ADD ANY REFERENCES HERE AS NEEDED>
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
| vvn0/a2c-PandaReachDense-v2 | vvn0 | null | 13 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is an assumption -- check the repo's file list for the actual zip name.
checkpoint = load_from_hub(repo_id="vvn0/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
| frangiral/dqn-SpaceInvadersNoFrameskip-v4-2 | frangiral | null | 15 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 2,219 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga frangiral -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga frangiral -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga frangiral
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 50000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| pfunk/Pong-v4-DQPN_p50_e0.10-seed1 | pfunk | null | 11 | 0 | cleanrl | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 1,989 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p50_e0.10.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```bash
pip install "cleanrl[DQPN_p50_e0.10]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p50_e0.10 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.10-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.10-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.10-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p50_e0.10 --start-policy-f 50000 --end-policy-f 1000 --evaluation-fraction 0.10 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.1,
'exp_name': 'DQPN_p50_e0.10',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 50000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
| YoriV/Reinforce-PixelCopter | YoriV | null | 6 | 0 | null | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class'] | true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| apatidar0/pegasus_conversation-summ | apatidar0 | pegasus | 15 | 2 | transformers | 0 | text2text-generation | true | false | false | null | null | ['samsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,005 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus_conversation-summ
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| tomasabril/bonusunit1 | tomasabril | null | 32 | 1 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy'] | false | true | true | 822 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: tomasabril/bonusunit1
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| ChemicalM/aaacups_LoRA | ChemicalM | null | 3 | 0 | null | 1 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 431 |
LoRA created from: https://civitai.com/models/4462/aaacups by https://civitai.com/user/modelsforall
Currently testing on photorealistic models. Weights between 0.5 and 2.0 seem to give good results, depending on the proportions of the starting model/image and the desired amount of reduction. Try a higher value on something like URPMv1.2 and a lower one for Realistic Vision V1.3. Going too high reduces skin texture and introduces artifacts.
| OrK7/parler_hate_speech | OrK7 | deberta-v2 | 9 | 9 | transformers | 1 | text-classification | true | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hate', 'hate_speech'] | false | true | true | 3,508 |
<a href="https://colab.research.google.com/github/OrKatz7/parler-hate-speech/blob/main/colab_demo.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
**Social Network Hate Detection: Finding Social Media Posts Containing Hateful Information Using Ensemble Methods and Back-Translation**

Recent research efforts have been directed toward the development of automated systems for detecting hateful content to assist social media providers in identifying and removing such content before it can be viewed by the public. This paper introduces a unique ensemble approach that utilizes DeBERTa models, which benefits from pre-training on massive synthetic data and the integration of back-translation techniques during training and testing. Our findings reveal that this approach delivers state-of-the-art results in hate-speech detection. The results demonstrate that the combination of back-translation, ensemble, and test-time augmentation results in a considerable improvement across various metrics and models in both the Parler and GAB datasets. We show that our method reduces models’ bias in an effective and meaningful way, and also reduces the RMSE from 0.838 to around 0.766 and increases R-squared from 0.520 to 0.599. The biggest improvement was seen in small DeBERTa models, while for large models, there was either a minor improvement or no change.
## Results
<img src="https://github.com/OrKatz7/parler-hate-speech/blob/main/docs/parler_results.jpeg?raw=true">
```
!pip install huggingface_hub
!pip install tokenizers transformers
!pip install iterative-stratification
!git clone https://github.com/OrKatz7/parler-hate-speech
%cd parler-hate-speech/src
```
```python
from huggingface_hub import hf_hub_download
import torch
import sys
from model import CustomModel, MeanPooling
from transformers import AutoTokenizer, AutoModel, AutoConfig
import numpy as np

class CFG:
    model = "microsoft/deberta-v3-base"
    target_cols = ['label_mean']
```
```python
name = "OrK7/parler_hate_speech"
downloaded_model_path = hf_hub_download(repo_id=name, filename="pytorch_model.bin")
model = torch.load(downloaded_model_path)
tokenizer = AutoTokenizer.from_pretrained(name)
```
```python
def prepare_input(text):
    inputs = tokenizer.encode_plus(
        text,
        return_tensors=None,
        add_special_tokens=True,
        max_length=512,
        pad_to_max_length=True,
        truncation=True,
    )
    for k, v in inputs.items():
        inputs[k] = torch.tensor(np.array(v).reshape(1, -1), dtype=torch.long)
    return inputs

def collate(inputs):
    mask_len = int(inputs["attention_mask"].sum(axis=1).max())
    for k, v in inputs.items():
        inputs[k] = inputs[k][:, :mask_len]
    return inputs
```
```python
from transformers import Pipeline

class HatePipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        preprocess_kwargs = {}
        if "maybe_arg" in kwargs:
            preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]
        return preprocess_kwargs, {}, {}

    def preprocess(self, inputs):
        out = prepare_input(inputs)
        return collate(out)

    def _forward(self, model_inputs):
        outputs = self.model(model_inputs)
        return outputs

    def postprocess(self, model_outputs):
        return np.array(model_outputs[0, 0].numpy()).clip(0, 1) * 4 + 1
```
```python
pipe = HatePipeline(model=model)
pipe("I Love you #")
```
results: 1.0
```python
pipe("I Hate #$%#$%Jewish%$#@%^^@#")
```
results: 4.155200004577637
| kkarpou/autotrain-greek-sentiment-analysis-3351392404 | kkarpou | deberta-v2 | 9 | 9 | transformers | 0 | text-classification | true | false | false | null | ['en'] | ['kkarpou/autotrain-data-greek-sentiment-analysis'] | {'emissions': 4.129267471119826} | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['autotrain', 'text-classification'] | false | true | true | 1,126 |
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 3351392404
- CO2 Emissions (in grams): 4.1293
## Validation Metrics
- Loss: 0.479
- Accuracy: 0.844
- Macro F1: 0.844
- Micro F1: 0.844
- Weighted F1: 0.843
- Macro Precision: 0.847
- Micro Precision: 0.844
- Weighted Precision: 0.849
- Macro Recall: 0.846
- Micro Recall: 0.844
- Weighted Recall: 0.844
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/kkarpou/autotrain-greek-sentiment-analysis-3351392404
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("kkarpou/autotrain-greek-sentiment-analysis-3351392404", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kkarpou/autotrain-greek-sentiment-analysis-3351392404", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
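To turn the raw logits into class probabilities, one possible continuation of the snippet above:
```python
import torch

# Map probabilities to the label names stored in the model config
probs = torch.softmax(outputs.logits, dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(p.item(), 4))
```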
| Luisfrdz/PPO-RL-1-LunarLander-v3 | Luisfrdz | null | 12 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 368 |
# **PPO-RL-Agent** Agent playing **LunarLander-v2**
This is a trained **PPO-RL-Agent** model playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is an assumption -- check the repo's file list for the actual zip name.
checkpoint = load_from_hub(repo_id="Luisfrdz/PPO-RL-1-LunarLander-v3", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| fjaragones/q-FrozenLake-v1-4x4-noSlippery | fjaragones | null | 5 | 0 | null | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 399 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebooks.
model = load_from_hub(repo_id="fjaragones/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
| sgoodfriend/poca-SoccerTwos-v3 | sgoodfriend | null | 20 | 265 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos'] | false | true | true | 848 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: sgoodfriend/poca-SoccerTwos-v3
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| fjaragones/Taxi-v3 | fjaragones | null | 5 | 0 | null | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 364 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebooks.
model = load_from_hub(repo_id="fjaragones/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
| kasrahabib/FInall__XXX08_02_23-bucket-finetunned | kasrahabib | bert | 10 | 2 | transformers | 0 | text-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,745 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kasrahabib/FInall__XXX08_02_23-bucket-finetunned
This model is a fine-tuned version of [kasrahabib/XXX08_02_23__-bucket-finetunned](https://huggingface.co/kasrahabib/XXX08_02_23__-bucket-finetunned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0954
- Validation Loss: 0.3458
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10940, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
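For reference, a sketch of the same optimizer assembled with the Keras API from the config values above:
```python
import tensorflow as tf

# Linear decay from 2e-05 to 0 over 10,940 steps, as described by the PolynomialDecay config
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=10940,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)
```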
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4270 | 0.3147 | 0 |
| 0.2937 | 0.3138 | 1 |
| 0.2277 | 0.3316 | 2 |
| 0.1909 | 0.3294 | 3 |
| 0.1589 | 0.3413 | 4 |
| 0.1374 | 0.3520 | 5 |
| 0.1223 | 0.3439 | 6 |
| 0.1112 | 0.3468 | 7 |
| 0.1034 | 0.3494 | 8 |
| 0.0954 | 0.3458 | 9 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| ajders/ddisco_classifier | ajders | bert | 86 | 142 | transformers | 0 | text-classification | true | false | false | cc-by-4.0 | ['da'] | ['ajders/ddisco'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,996 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# da-discourse-coherence-base
This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on the [DDisco](https://huggingface.co/datasets/ajders/ddisco) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7487
- Accuracy: 0.6915
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 703
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 6.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3422 | 0.4 | 5 | 1.0166 | 0.5721 |
| 0.9645 | 0.8 | 10 | 0.8966 | 0.5721 |
| 0.9854 | 1.24 | 15 | 0.8499 | 0.5721 |
| 0.8628 | 1.64 | 20 | 0.8379 | 0.6517 |
| 0.9046 | 2.08 | 25 | 0.8228 | 0.5721 |
| 0.8361 | 2.48 | 30 | 0.7980 | 0.5821 |
| 0.8158 | 2.88 | 35 | 0.8095 | 0.5821 |
| 0.8689 | 3.32 | 40 | 0.7989 | 0.6169 |
| 0.8125 | 3.72 | 45 | 0.7730 | 0.6965 |
| 0.843 | 4.16 | 50 | 0.7566 | 0.6418 |
| 0.7421 | 4.56 | 55 | 0.7840 | 0.6517 |
| 0.7949 | 4.96 | 60 | 0.7531 | 0.6915 |
| 0.828 | 5.4 | 65 | 0.7464 | 0.6816 |
| 0.7438 | 5.8 | 70 | 0.7487 | 0.6915 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.0a0+d0d6b1f
- Datasets 2.9.0
- Tokenizers 0.13.2
### Contributor
[ajders](https://github.com/AJDERS)
| kitintouch/kit-the-bear | kitintouch | null | 4 | 0 | null | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 1,532 |
### kit the bear Dreambooth model trained by kitintouch with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training), using the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of kitthebear (use that token in your prompt):

| augustogeog/q-FrozenLake-v1-noslippery | augustogeog | null | 5 | 0 | null | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 396 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper from the Deep RL Course notebook;
# it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="augustogeog/q-FrozenLake-v1-noslippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
lllyasviel/ControlNet
|
lllyasviel
| null | 18 | 0 | null | 56 | null | false | false | false |
openrail
| null | null | null | 0 | 0 | 0 | 0 | 2 | 1 | 1 |
[]
| false | true | true | 2,706 |
These are the pretrained weights of ControlNet, along with some auxiliary detector weights.
See also: https://github.com/lllyasviel/ControlNet
# Description of Files
ControlNet/models/control_sd15_canny.pth
- The ControlNet+SD1.5 model to control SD using canny edge detection.
ControlNet/models/control_sd15_depth.pth
- The ControlNet+SD1.5 model to control SD using Midas depth estimation.
ControlNet/models/control_sd15_hed.pth
- The ControlNet+SD1.5 model to control SD using HED edge detection (soft edge).
ControlNet/models/control_sd15_mlsd.pth
- The ControlNet+SD1.5 model to control SD using M-LSD line detection (will also work with traditional Hough transform).
ControlNet/models/control_sd15_normal.pth
- The ControlNet+SD1.5 model to control SD using normal map. Best to use the normal map generated by that Gradio app. Other normal maps may also work as long as the direction is correct (left looks red, right looks blue, up looks green, down looks purple).
ControlNet/models/control_sd15_openpose.pth
- The ControlNet+SD1.5 model to control SD using OpenPose pose detection. Directly manipulating pose skeleton should also work.
ControlNet/models/control_sd15_scribble.pth
- The ControlNet+SD1.5 model to control SD using human scribbles. The model is trained with boundary edges with very strong data augmentation to simulate boundary lines similar to those drawn by humans.
ControlNet/models/control_sd15_seg.pth
- The ControlNet+SD1.5 model to control SD using semantic segmentation. The protocol is ADE20k.
ControlNet/annotator/ckpts/body_pose_model.pth
- Third-party model: Openpose’s pose detection model.
ControlNet/annotator/ckpts/hand_pose_model.pth
- Third-party model: Openpose’s hand detection model.
ControlNet/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt
- Third-party model: Midas depth estimation model.
ControlNet/annotator/ckpts/mlsd_large_512_fp32.pth
- Third-party model: M-LSD detection model.
ControlNet/annotator/ckpts/mlsd_tiny_512_fp32.pth
- Third-party model: a smaller variant of the M-LSD detection model (we do not use this one).
ControlNet/annotator/ckpts/network-bsds500.pth
- Third-party model: HED boundary detection.
ControlNet/annotator/ckpts/upernet_global_small.pth
- Third-party model: Uniformer semantic segmentation.
ControlNet/training/fill50k.zip
- The data for our training tutorial.
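The checkpoints listed above can also be fetched individually with `huggingface_hub`; a minimal sketch (the filename is copied from the listing, and the downloaded weights are then consumed by the scripts in the GitHub repository):
```python
from huggingface_hub import hf_hub_download

# Downloads one ControlNet+SD1.5 checkpoint from this repo into the local cache.
path = hf_hub_download(repo_id="lllyasviel/ControlNet", filename="models/control_sd15_canny.pth")
print(path)
```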
# Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
|
albertqueralto/ppo-SnowballTarget
|
albertqueralto
| null | 20 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 861 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: albertqueralto/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
sgoodfriend/PPO-sb3-LunarLander-v2
|
sgoodfriend
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is hypothetical; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Hypothetical filename; adjust to the actual .zip stored in this repo.
checkpoint = load_from_hub("sgoodfriend/PPO-sb3-LunarLander-v2", "PPO-sb3-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
augustogeog/q-Taxi-v3
|
augustogeog
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 367 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper from the Deep RL Course notebook;
# it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="augustogeog/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
pfunk/Pong-v4-DQPN_p50_e0.25-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,990 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p50_e0.25.py).
## Get Started
To use this model, please install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[DQPN_p50_e0.25]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p50_e0.25 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.25-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.25-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.25-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p50_e0.25 --start-policy-f 50000 --end-policy-f 1000 --evaluation-fraction 0.25 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.25,
'exp_name': 'DQPN_p50_e0.25',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 50000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
Iggg0r/rl_course
|
Iggg0r
| null | 19 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is hypothetical; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Hypothetical filename; adjust to the actual .zip stored in this repo.
checkpoint = load_from_hub("Iggg0r/rl_course", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
pneubauer/basic-poca-SoccerTwos_1
|
pneubauer
| null | 7 | 259 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 851 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: pneubauer/basic-poca-SoccerTwos_1
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Snim/dqn-SpaceInvadersNoFrameskip-v4
|
Snim
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,205 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Snim -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Snim -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Snim
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
kasrahabib/FInall_35__XXX08_02_23-bucket-finetunned
|
kasrahabib
|
bert
| 10 | 2 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,747 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kasrahabib/FInall_35__XXX08_02_23-bucket-finetunned
This model is a fine-tuned version of [kasrahabib/XXX08_02_23__-bucket-finetunned](https://huggingface.co/kasrahabib/XXX08_02_23__-bucket-finetunned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0836
- Validation Loss: 0.2277
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7660, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3305 | 0.2448 | 0 |
| 0.2215 | 0.2168 | 1 |
| 0.1728 | 0.2316 | 2 |
| 0.1443 | 0.2283 | 3 |
| 0.1225 | 0.2261 | 4 |
| 0.1096 | 0.2232 | 5 |
| 0.1027 | 0.2235 | 6 |
| 0.0937 | 0.2273 | 7 |
| 0.0876 | 0.2278 | 8 |
| 0.0836 | 0.2277 | 9 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
johko/mcc_co3dv2_all_categories
|
johko
| null | 3 | 0 | null | 0 | null | false | false | false |
apache-2.0
| null |
['CO3Dv2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['3D Reconstruction']
| false | true | true | 399 |
# Multiview Compressive Coding (MCC)
## Model Description
These are model weights originally provided by the authors of the paper [Multiview Compressive Coding (MCC)](https://arxiv.org/abs/2301.08247).
Their method reconstructs a full 3D object from a single RGB-D image.
## Datasets
The authors trained the model on [the CO3D v2 dataset](https://ai.facebook.com/datasets/CO3D-dataset/)
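Since these weights are not packaged for a standard library, here is a minimal sketch for pulling them locally with `huggingface_hub` (loading them afterwards through the authors' MCC codebase is assumed):
```python
from huggingface_hub import snapshot_download

# Downloads the raw MCC checkpoint files; load them with the authors' code.
local_dir = snapshot_download(repo_id="johko/mcc_co3dv2_all_categories")
print(local_dir)
```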
|
saurabhnaik/dqn-SpaceInvadersNoFrameskip-v4
|
saurabhnaik
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,227 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga saurabhnaik -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga saurabhnaik -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga saurabhnaik
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
mfrayha/marcelo
|
mfrayha
| null | 19 | 35 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 435 |
### Marcelo Dreambooth model trained by mfrayha with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook
Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)!
To get started head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars).
Sample pictures of this concept:
|
DeepaKrish/roberta-base-finetuned-squad
|
DeepaKrish
|
roberta
| 13 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,192 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0491
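For quick experimentation, the checkpoint can be queried with the standard `transformers` question-answering pipeline; a minimal sketch (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="DeepaKrish/roberta-base-finetuned-squad")

# Illustrative inputs; replace with your own question/context pair.
result = qa(question="Where is the Eiffel Tower?", context="The Eiffel Tower is in Paris.")
print(result["answer"], result["score"])
```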
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 27 | 0.1224 |
| No log | 2.0 | 54 | 0.0491 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.9.0
- Datasets 2.5.1
- Tokenizers 0.13.2
|
DarwinAnim8or/GPT-Grug-355m
|
DarwinAnim8or
|
gpt2
| 10 | 11 |
transformers
| 0 |
text-generation
| true | false | false |
mit
|
['en']
|
['DarwinAnim8or/grug']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['grug', 'caveman', 'fun']
| false | true | true | 1,690 |
# GPT-Grug-355m
A finetuned version of [GPT2-Medium](https://huggingface.co/gpt2-medium) on the 'grug' dataset.
A demo is available [here](https://huggingface.co/spaces/DarwinAnim8or/grug-chat)
If you're interested, there's a smaller model available here: [GPT-Grug-125m](https://huggingface.co/DarwinAnim8or/gpt-grug-125m)
Do note however that it is very limited by comparison.
# Training Procedure
This was trained on the 'grug' dataset, using the "HappyTransformers" library on Google Colab.
This model was trained for 4 epochs with learning rate 1e-2.
The notebook used to train has been included in this repo.
# Biases & Limitations
This likely contains the same biases and limitations as the original GPT2 that it is based on, and additionally heavy biases from the grug dataset.
# Intended Use
This model is meant for fun, please do not take anything this caveman says seriously.
# Sample Use
```python
# Import the model:
from happytransformer import HappyGeneration
happy_gen = HappyGeneration("GPT2", "DarwinAnim8or/gpt-grug-355m")

# Set generation settings:
from happytransformer import GENSettings
args_top_k = GENSettings(no_repeat_ngram_size=2, do_sample=True, top_k=50, temperature=0.7, max_length=50, early_stopping=False)

# Generate a response:
result = happy_gen.generate_text("""Person: "Hello grug"
Grug: "hello person"
###
Person: "how are you grug"
Grug: "grug doing ok. grug find many berry. good for tribe."
###
Person: "what does grug think of new spear weapon?"
Grug: "grug no like new spear weapon. grug stick bigger. spear too small, break easy"
###
Person: "what does grug think of football?"
Grug: \"""", args=args_top_k)
print(result)
print(result.text)
```
|
HueyNemud/icdar23-entrydetector_plaintext
|
HueyNemud
|
camembert
| 11 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,488 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# icdar23-entrydetector_plaintext
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0424
- Ebegin: {'precision': 0.9725125822686799, 'recall': 0.9447160586686725, 'f1': 0.9584128195345288, 'number': 2659}
- Eend: {'precision': 0.9570211189329382, 'recall': 0.9652466367713004, 'f1': 0.9611162790697675, 'number': 2676}
- Overall Precision: 0.9646
- Overall Recall: 0.9550
- Overall F1: 0.9598
- Overall Accuracy: 0.9923
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.07 | 300 | 0.0487 | 0.9874 | 0.9565 | 0.9717 | 0.9943 |
| 0.1698 | 0.14 | 600 | 0.0310 | 0.9891 | 0.9709 | 0.9799 | 0.9959 |
| 0.1698 | 0.21 | 900 | 0.0267 | 0.9746 | 0.9764 | 0.9755 | 0.9953 |
| 0.0346 | 0.29 | 1200 | 0.0217 | 0.9885 | 0.9685 | 0.9784 | 0.9956 |
| 0.0237 | 0.36 | 1500 | 0.0201 | 0.9866 | 0.9742 | 0.9804 | 0.9960 |
| 0.0237 | 0.43 | 1800 | 0.0268 | 0.9883 | 0.9561 | 0.9719 | 0.9944 |
| 0.0205 | 0.5 | 2100 | 0.0216 | 0.9823 | 0.9779 | 0.9801 | 0.9959 |
| 0.0205 | 0.57 | 2400 | 0.0236 | 0.9874 | 0.9700 | 0.9787 | 0.9957 |
| 0.0196 | 0.64 | 2700 | 0.0246 | 0.9877 | 0.9668 | 0.9772 | 0.9954 |
| 0.0195 | 0.72 | 3000 | 0.0254 | 0.9789 | 0.9682 | 0.9735 | 0.9950 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
kasrahabib/Sentence_transformer_FInall_35__XXX08_02_23-bucket-finetunned
|
kasrahabib
|
bert
| 10 | 2 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,719 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kasrahabib/Sentence_transformer_FInall_35__XXX08_02_23-bucket-finetunned
This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2971
- Validation Loss: 0.8087
- Epoch: 8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7660, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1510 | 1.4873 | 0 |
| 1.2438 | 1.0710 | 1 |
| 0.8797 | 0.9329 | 2 |
| 0.6640 | 0.8707 | 3 |
| 0.5321 | 0.8403 | 4 |
| 0.4386 | 0.8146 | 5 |
| 0.3740 | 0.8131 | 6 |
| 0.3240 | 0.8202 | 7 |
| 0.2971 | 0.8087 | 8 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Kaludi/food-category-classification-v2.0
|
Kaludi
|
swin
| 5 | 203 |
transformers
| 0 |
image-classification
| true | false | false | null | null |
['Kaludi/food-category-classification-v2.0']
|
{'emissions': 12.456278925446485}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'image-classification']
| false | true | true | 1,592 |
# Food Category Classification v2.0
This is an updated version of the [old](https://huggingface.co/Kaludi/food-category-classification) Food Category Image Classifier model, trained by [Kaludi](https://huggingface.co/Kaludi) to recognize **12** different categories of food: **Bread**, **Dairy**, **Dessert**, **Egg**, **Fried Food**, **Fruit**, **Meat**, **Noodles**, **Rice**, **Seafood**, **Soup**, and **Vegetable**. It can accurately classify an image of food into one of these categories by analyzing its visual features. This model can be used by food bloggers, restaurants, and recipe websites to quickly categorize and sort their food images, making it easier to manage their content and provide a better user experience.
### Gradio
This model supports a [Gradio](https://github.com/gradio-app/gradio) Web UI to run the food-category-classification model:
[](https://huggingface.co/spaces/Kaludi/Food-Category-Classification_V2_App)
## Validation Metrics
- Problem type: Multi-class Classification
- Model ID: 3353292434
- CO2 Emissions (in grams): 12.4563
- Loss: 0.144
- Accuracy: 0.960
- Macro F1: 0.959
- Micro F1: 0.960
- Weighted F1: 0.959
- Macro Precision: 0.962
- Micro Precision: 0.960
- Weighted Precision: 0.962
- Macro Recall: 0.960
- Micro Recall: 0.960
- Weighted Recall: 0.960
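Outside the Gradio app, the checkpoint also works with the stock `transformers` image-classification pipeline; a minimal sketch (the image path is illustrative):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Kaludi/food-category-classification-v2.0")

# Any local path or URL to a food photo works here; this one is illustrative.
print(classifier("ramen.jpg"))
```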
|
Anjoe/poetry-gpt2-large-no-hoel_3
|
Anjoe
|
gpt2
| 14 | 25 |
transformers
| 1 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,246 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poetry-gpt2-large-no-hoel_3
This model is a fine-tuned version of [benjamin/gerpt2-large](https://huggingface.co/benjamin/gerpt2-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7234
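As a plain GPT-2-style causal language model, the checkpoint can be sampled with the `transformers` text-generation pipeline; a minimal sketch (the German prompt is illustrative, since the base model is German):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Anjoe/poetry-gpt2-large-no-hoel_3")

# Illustrative German prompt.
print(generator("Der Abend senkt sich", max_length=60, do_sample=True)[0]["generated_text"])
```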
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.678 | 1.0 | 19917 | 3.7393 |
| 3.3445 | 2.0 | 39834 | 3.7234 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Anjoe/poetry-gpt2-large-no_schiller_3
|
Anjoe
|
gpt2
| 14 | 35 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,250 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poetry-gpt2-large-no_schiller_3
This model is a fine-tuned version of [benjamin/gerpt2-large](https://huggingface.co/benjamin/gerpt2-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7301
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.6925 | 1.0 | 20041 | 3.7494 |
| 3.3496 | 2.0 | 40082 | 3.7301 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Anjoe/poetry-gpt2-large-complete_3
|
Anjoe
|
gpt2
| 14 | 25 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,247 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poetry-gpt2-large-complete_3
This model is a fine-tuned version of [benjamin/gerpt2-large](https://huggingface.co/benjamin/gerpt2-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.6838 | 1.0 | 20371 | 3.7627 |
| 3.3382 | 2.0 | 40742 | 3.7483 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gupta99riya/mbert-fine-tune-ner_fr
|
gupta99riya
|
bert
| 13 | 9 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['wikiann']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,456 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-fine-tune-ner_fr
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1887
- Precision: 0.9000
- Recall: 0.9078
- F1: 0.9038
- Accuracy: 0.9491
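A quick way to try the tagger is the `transformers` token-classification pipeline; a minimal sketch (the French sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="gupta99riya/mbert-fine-tune-ner_fr",
               aggregation_strategy="simple")

# Illustrative French input.
print(ner("Emmanuel Macron est né à Amiens."))
```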
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2929 | 1.0 | 1250 | 0.2028 | 0.8782 | 0.8938 | 0.8860 | 0.9417 |
| 0.1355 | 2.0 | 2500 | 0.1887 | 0.9000 | 0.9078 | 0.9038 | 0.9491 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
SRobbins/a2c-PandaReachDense-v2
|
SRobbins
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is hypothetical; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Hypothetical filename; adjust to the actual .zip stored in this repo.
checkpoint = load_from_hub("SRobbins/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
aimarsg/pharmacoNER
|
aimarsg
|
roberta
| 13 | 14 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['pharmaconer']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,539 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pharmacoNER
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the pharmaconer dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0251
- Precision: 0.9058
- Recall: 0.9026
- F1: 0.9042
- Accuracy: 0.9948
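The checkpoint can be loaded explicitly with the `transformers` auto classes and wrapped in a pipeline; a minimal sketch (the Spanish clinical sentence is illustrative, PharmaCoNER being a Spanish corpus):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("aimarsg/pharmacoNER")
model = AutoModelForTokenClassification.from_pretrained("aimarsg/pharmacoNER")
ner = pipeline("token-classification", model=model, tokenizer=tokenizer)

# Illustrative Spanish clinical sentence.
print(ner("Se administró paracetamol al paciente."))
```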
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0272 | 1.0 | 1017 | 0.0288 | 0.8047 | 0.8503 | 0.8269 | 0.9914 |
| 0.0114 | 2.0 | 2034 | 0.0240 | 0.8950 | 0.8998 | 0.8974 | 0.9945 |
| 0.006 | 3.0 | 3051 | 0.0251 | 0.9058 | 0.9026 | 0.9042 | 0.9948 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
HueyNemud/icdar23-entrydetector_plaintext_breaks
|
HueyNemud
|
camembert
| 11 | 2 |
transformers
| 0 |
token-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,771 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# icdar23-entrydetector_plaintext_breaks
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0120
- Ebegin: {'precision': 0.9901997738409348, 'recall': 0.9879654005265137, 'f1': 0.9890813253012049, 'number': 2659}
- Eend: {'precision': 0.9916824196597354, 'recall': 0.9801943198804185, 'f1': 0.9859049050930276, 'number': 2676}
- Overall Precision: 0.9909
- Overall Recall: 0.9841
- Overall F1: 0.9875
- Overall Accuracy: 0.9977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.07 | 300 | 0.0318 | 0.9847 | 0.9761 | 0.9803 | 0.9966 |
| 0.1683 | 0.14 | 600 | 0.0164 | 0.9878 | 0.9890 | 0.9884 | 0.9978 |
| 0.1683 | 0.21 | 900 | 0.0146 | 0.9900 | 0.9853 | 0.9876 | 0.9976 |
| 0.0203 | 0.29 | 1200 | 0.0112 | 0.9862 | 0.9902 | 0.9882 | 0.9978 |
| 0.0123 | 0.36 | 1500 | 0.0089 | 0.9943 | 0.9878 | 0.9910 | 0.9983 |
| 0.0123 | 0.43 | 1800 | 0.0139 | 0.9970 | 0.9814 | 0.9891 | 0.9979 |
| 0.0109 | 0.5 | 2100 | 0.0101 | 0.9937 | 0.9882 | 0.9909 | 0.9982 |
| 0.0109 | 0.57 | 2400 | 0.0087 | 0.9949 | 0.9896 | 0.9922 | 0.9985 |
| 0.0092 | 0.64 | 2700 | 0.0081 | 0.9849 | 0.9919 | 0.9884 | 0.9978 |
| 0.0084 | 0.72 | 3000 | 0.0087 | 0.9937 | 0.9867 | 0.9902 | 0.9981 |
| 0.0084 | 0.79 | 3300 | 0.0089 | 0.9915 | 0.9889 | 0.9902 | 0.9981 |
| 0.0069 | 0.86 | 3600 | 0.0092 | 0.9899 | 0.9901 | 0.9900 | 0.9981 |
| 0.0069 | 0.93 | 3900 | 0.0097 | 0.9845 | 0.9915 | 0.9880 | 0.9977 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
huggingtweets/101dadjokes-dadsjokes
|
huggingtweets
|
gpt2
| 11 | 2 |
transformers
| 0 |
text-generation
| true | false | false | null |
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['huggingtweets']
| false | true | true | 3,498 |
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1406653045757317121/YCS9YykL_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/641271414/dad_jokes_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dad Jokes & Dad Jokes</div>
<div style="text-align: center; font-size: 14px;">@101dadjokes-dadsjokes</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dad Jokes & Dad Jokes.
| Data | Dad Jokes | Dad Jokes |
| --- | --- | --- |
| Tweets downloaded | 184 | 2043 |
| Retweets | 14 | 0 |
| Short tweets | 10 | 123 |
| Tweets kept | 160 | 1920 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/od2iwqt2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @101dadjokes-dadsjokes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/7ruisgab) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/7ruisgab/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/101dadjokes-dadsjokes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sanchit-gandhi/ast-fleurs-langid-dropout-0.2-layers-6
|
sanchit-gandhi
|
audio-spectrogram-transformer
| 9 | 0 |
transformers
| 0 |
audio-classification
| true | false | false |
bsd-3-clause
| null |
['fleurs']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,522 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-fleurs-langid-dropout-0.2-layers-6
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4304
- Accuracy: 0.1802
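The model can be run with the `transformers` audio-classification pipeline; a minimal sketch (the audio filename is illustrative):
```python
from transformers import pipeline

clf = pipeline("audio-classification", model="sanchit-gandhi/ast-fleurs-langid-dropout-0.2-layers-6")

# Any local audio file path works here; this one is illustrative.
print(clf("sample.wav"))
```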
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0405 | 1.0 | 16987 | 6.6986 | 0.1722 |
| 0.0002 | 2.0 | 33974 | 7.1284 | 0.1811 |
| 0.0 | 3.0 | 50961 | 7.4304 | 0.1802 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
WitchHuntTV/XiJinPing_Singing
|
WitchHuntTV
| null | 3 | 0 | null | 0 | null | false | false | false |
gpl-3.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['VITS', 'so-vits3']
| false | true | true | 288 |
A Xi Jinping singing-voice model for so-vits3. Anyone is welcome to use this model for creative work; the model will be updated from time to time, up to 100k steps.
Training models is not easy and GPUs burn money, so please consider sponsoring. Monero (XMR) address: 87MTHJgrCRfC6S8hV1TNiV1SyYZ19ny7o8YHui462TYKiVLzUpdTHDCfErqbSSSe4GMriVEfM2xK6eG87sEwvQPj4LMBqdD
so-vits-svc project: https://github.com/innnky/so-vits-svc
After downloading the model, there is no need to prepare a dataset or train; skip straight to the inference step.
For an inference tutorial, see Bilibili article cv20533940.
|
hulkster/sd-class-butterflies-32
|
hulkster
| null | 6 | 2 |
diffusers
| 0 |
unconditional-image-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
| false | true | true | 365 |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('hulkster/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
HueyNemud/icdar23-entrydetector_plaintext_breaks_indents_left_ref
|
HueyNemud
|
camembert
| 11 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,891 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# icdar23-entrydetector_plaintext_breaks_indents_left_ref
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0062
- Ebegin: {'precision': 0.997709049255441, 'recall': 0.9827002632568634, 'f1': 0.9901477832512315, 'number': 2659}
- Eend: {'precision': 0.9973363774733638, 'recall': 0.9794469357249627, 'f1': 0.9883107088989442, 'number': 2676}
- Overall Precision: 0.9975
- Overall Recall: 0.9811
- Overall F1: 0.9892
- Overall Accuracy: 0.9982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.07 | 300 | 0.0389 | 0.9556 | 0.9663 | 0.9609 | 0.9949 |
| 0.1775 | 0.14 | 600 | 0.0162 | 0.9854 | 0.9893 | 0.9873 | 0.9977 |
| 0.1775 | 0.21 | 900 | 0.0114 | 0.9928 | 0.9889 | 0.9909 | 0.9984 |
| 0.0229 | 0.29 | 1200 | 0.0172 | 0.9793 | 0.9851 | 0.9822 | 0.9975 |
| 0.016 | 0.36 | 1500 | 0.0087 | 0.9906 | 0.9907 | 0.9907 | 0.9984 |
| 0.016 | 0.43 | 1800 | 0.0079 | 0.9955 | 0.9879 | 0.9917 | 0.9985 |
| 0.0115 | 0.5 | 2100 | 0.0093 | 0.9910 | 0.9912 | 0.9911 | 0.9984 |
| 0.0115 | 0.57 | 2400 | 0.0102 | 0.9816 | 0.9942 | 0.9878 | 0.9978 |
| 0.0109 | 0.64 | 2700 | 0.0072 | 0.9895 | 0.9939 | 0.9917 | 0.9985 |
| 0.0075 | 0.72 | 3000 | 0.0055 | 0.9919 | 0.9917 | 0.9918 | 0.9985 |
| 0.0075 | 0.79 | 3300 | 0.0078 | 0.9948 | 0.9910 | 0.9929 | 0.9987 |
| 0.007 | 0.86 | 3600 | 0.0057 | 0.9937 | 0.9933 | 0.9935 | 0.9989 |
| 0.007 | 0.93 | 3900 | 0.0059 | 0.9830 | 0.9957 | 0.9893 | 0.9981 |
| 0.0055 | 1.0 | 4200 | 0.0049 | 0.9972 | 0.9899 | 0.9935 | 0.9988 |
| 0.0029 | 1.07 | 4500 | 0.0064 | 0.9944 | 0.9926 | 0.9935 | 0.9989 |
| 0.0029 | 1.14 | 4800 | 0.0057 | 0.9927 | 0.9919 | 0.9923 | 0.9987 |
| 0.0043 | 1.22 | 5100 | 0.0064 | 0.9890 | 0.9945 | 0.9917 | 0.9986 |
| 0.0043 | 1.29 | 5400 | 0.0058 | 0.9857 | 0.9957 | 0.9907 | 0.9983 |
| 0.0028 | 1.36 | 5700 | 0.0049 | 0.9961 | 0.9922 | 0.9941 | 0.9990 |
| 0.0034 | 1.43 | 6000 | 0.0048 | 0.9952 | 0.9937 | 0.9945 | 0.9990 |
| 0.0034 | 1.5 | 6300 | 0.0050 | 0.9936 | 0.9937 | 0.9937 | 0.9989 |
| 0.0022 | 1.57 | 6600 | 0.0046 | 0.9937 | 0.9934 | 0.9936 | 0.9989 |
| 0.0022 | 1.65 | 6900 | 0.0042 | 0.9954 | 0.9929 | 0.9941 | 0.9990 |
| 0.0039 | 1.72 | 7200 | 0.0042 | 0.9959 | 0.9931 | 0.9945 | 0.9990 |
| 0.003 | 1.79 | 7500 | 0.0039 | 0.9968 | 0.9927 | 0.9947 | 0.9991 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
HueyNemud/icdar23-entrydetector_plaintext_breaks_indents_left_diff
|
HueyNemud
|
camembert
| 11 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,433 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# icdar23-entrydetector_plaintext_breaks_indents_left_diff
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0072
- Ebegin: {'precision': 0.9935508345978755, 'recall': 0.9849567506581421, 'f1': 0.9892351274787535, 'number': 2659}
- Eend: {'precision': 0.9980857580398163, 'recall': 0.9742152466367713, 'f1': 0.9860060514372163, 'number': 2676}
- Overall Precision: 0.9958
- Overall Recall: 0.9796
- Overall F1: 0.9876
- Overall Accuracy: 0.9980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.07 | 300 | 0.0309 | 0.9593 | 0.9879 | 0.9734 | 0.9955 |
| 0.161 | 0.14 | 600 | 0.0126 | 0.9890 | 0.9911 | 0.9900 | 0.9982 |
| 0.161 | 0.21 | 900 | 0.0116 | 0.9730 | 0.9894 | 0.9811 | 0.9971 |
| 0.0165 | 0.29 | 1200 | 0.0087 | 0.9938 | 0.9918 | 0.9928 | 0.9987 |
| 0.0119 | 0.36 | 1500 | 0.0093 | 0.9851 | 0.9937 | 0.9894 | 0.9981 |
| 0.0119 | 0.43 | 1800 | 0.0055 | 0.9942 | 0.9913 | 0.9928 | 0.9987 |
| 0.0091 | 0.5 | 2100 | 0.0057 | 0.9951 | 0.9904 | 0.9928 | 0.9987 |
| 0.0091 | 0.57 | 2400 | 0.0058 | 0.9920 | 0.9936 | 0.9928 | 0.9987 |
| 0.0083 | 0.64 | 2700 | 0.0059 | 0.9896 | 0.9918 | 0.9907 | 0.9983 |
| 0.0065 | 0.72 | 3000 | 0.0045 | 0.9968 | 0.9917 | 0.9942 | 0.9990 |
| 0.0065 | 0.79 | 3300 | 0.0047 | 0.9920 | 0.9937 | 0.9929 | 0.9987 |
| 0.0054 | 0.86 | 3600 | 0.0050 | 0.9945 | 0.9909 | 0.9926 | 0.9987 |
| 0.0054 | 0.93 | 3900 | 0.0064 | 0.9838 | 0.9968 | 0.9903 | 0.9983 |
| 0.0056 | 1.0 | 4200 | 0.0046 | 0.9971 | 0.9920 | 0.9946 | 0.9990 |
| 0.0034 | 1.07 | 4500 | 0.0037 | 0.9959 | 0.9936 | 0.9948 | 0.9990 |
| 0.0034 | 1.14 | 4800 | 0.0047 | 0.9983 | 0.9900 | 0.9941 | 0.9989 |
| 0.0035 | 1.22 | 5100 | 0.0043 | 0.9936 | 0.9951 | 0.9944 | 0.9990 |
| 0.0035 | 1.29 | 5400 | 0.0061 | 0.9892 | 0.9957 | 0.9925 | 0.9986 |
| 0.002 | 1.36 | 5700 | 0.0057 | 0.9898 | 0.9947 | 0.9923 | 0.9986 |
| 0.0048 | 1.43 | 6000 | 0.0042 | 0.9954 | 0.9933 | 0.9944 | 0.9990 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
kmposkid1/q-FrozenLake-v1-4x4-noSlippery
|
kmposkid1
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 398 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper from the Deep RL Course notebook;
# it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="kmposkid1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kmposkid1/q-Taxi-v3
|
kmposkid1
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 365 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper from the Deep RL Course notebook;
# it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="kmposkid1/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
slopezay/ppo-LunarLander-v2
|
slopezay
| null | 12 | 1 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is hypothetical; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Hypothetical filename; adjust to the actual .zip stored in this repo.
checkpoint = load_from_hub("slopezay/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
DeepaKrish/roberta-base-squad2-finetuned
|
DeepaKrish
|
roberta
| 13 | 6 |
transformers
| 0 |
question-answering
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,223 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-finetuned
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 27 | 0.0023 |
| No log | 2.0 | 54 | 0.0010 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.9.0
- Datasets 2.5.1
- Tokenizers 0.13.2
|
pfunk/Pong-v4-DQPN_p500_pt0.1-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,002 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p500_pt0.1.py).
## Get Started
To use this model, please install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[DQPN_p500_pt0.1]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p500_pt0.1 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_pt0.1-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_pt0.1-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_pt0.1-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p500_pt0.1 --start-policy-f 500000 --end-policy-f 500000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 500000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p500_pt0.1',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 0.1,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 500000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
Nonin/ppo-Huggy
|
Nonin
| null | 32 | 8 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 816 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We also wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: Nonin/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pryjuli/ppo-Huggy
|
pryjuli
| null | 32 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 818 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We also wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: pryjuli/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HueyNemud/icdar23-entrydetector_plaintext_breaks_indents_left_ref_right_ref
|
HueyNemud
|
camembert
| 11 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,335 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# icdar23-entrydetector_plaintext_breaks_indents_left_ref_right_ref
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0068
- Ebegin: {'precision': 1.0, 'recall': 0.9793155321549455, 'f1': 0.9895496864905947, 'number': 2659}
- Eend: {'precision': 0.9988562714449104, 'recall': 0.9790732436472347, 'f1': 0.9888658237403285, 'number': 2676}
- Overall Precision: 0.9994
- Overall Recall: 0.9792
- Overall F1: 0.9892
- Overall Accuracy: 0.9983
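A minimal inference sketch with the 🤗 `pipeline` API (the input line is an illustrative directory entry, not taken from the training data):
```python
from transformers import pipeline

detector = pipeline(
    "token-classification",
    model="HueyNemud/icdar23-entrydetector_plaintext_breaks_indents_left_ref_right_ref",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(detector("Dupont (J.), horloger, rue de la Paix, 12."))
```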
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.07 | 300 | 0.0309 | 0.9637 | 0.9910 | 0.9771 | 0.9964 |
| 0.181 | 0.14 | 600 | 0.0144 | 0.9777 | 0.9863 | 0.9819 | 0.9974 |
| 0.181 | 0.21 | 900 | 0.0095 | 0.9969 | 0.9845 | 0.9906 | 0.9985 |
| 0.0168 | 0.29 | 1200 | 0.0105 | 0.9869 | 0.9913 | 0.9891 | 0.9982 |
| 0.011 | 0.36 | 1500 | 0.0063 | 0.9937 | 0.9915 | 0.9926 | 0.9988 |
| 0.011 | 0.43 | 1800 | 0.0064 | 0.9883 | 0.9940 | 0.9911 | 0.9986 |
| 0.01 | 0.5 | 2100 | 0.0203 | 0.9552 | 0.9507 | 0.9529 | 0.9922 |
| 0.01 | 0.57 | 2400 | 0.0049 | 0.9946 | 0.9925 | 0.9935 | 0.9989 |
| 0.0144 | 0.64 | 2700 | 0.0056 | 0.9871 | 0.9944 | 0.9907 | 0.9984 |
| 0.0058 | 0.72 | 3000 | 0.0051 | 0.9928 | 0.9930 | 0.9929 | 0.9988 |
| 0.0058 | 0.79 | 3300 | 0.0036 | 0.9969 | 0.9920 | 0.9945 | 0.9991 |
| 0.0048 | 0.86 | 3600 | 0.0047 | 0.9930 | 0.9947 | 0.9938 | 0.9990 |
| 0.0048 | 0.93 | 3900 | 0.0053 | 0.9863 | 0.9965 | 0.9914 | 0.9985 |
| 0.0052 | 1.0 | 4200 | 0.0033 | 0.9985 | 0.9909 | 0.9947 | 0.9991 |
| 0.0029 | 1.07 | 4500 | 0.0039 | 0.9938 | 0.9954 | 0.9946 | 0.9991 |
| 0.0029 | 1.14 | 4800 | 0.0038 | 0.9981 | 0.9906 | 0.9943 | 0.9991 |
| 0.0034 | 1.22 | 5100 | 0.0044 | 0.9937 | 0.9934 | 0.9936 | 0.9989 |
| 0.0034 | 1.29 | 5400 | 0.0040 | 0.9884 | 0.9959 | 0.9921 | 0.9987 |
| 0.0027 | 1.36 | 5700 | 0.0040 | 0.9975 | 0.9910 | 0.9942 | 0.9990 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Isaacp/Reinforce-cartpole
|
Isaacp
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ramelol/ppo-LunarLander-v2
|
ramelol
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed, not confirmed.
checkpoint = load_from_hub(repo_id="ramelol/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ivi137/q-FrozenLake-v1-4x4-noSlippery
|
ivi137
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 395 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the download helper defined in the Deep RL Course notebook;
# it fetches and unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="ivi137/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
guydegnol/ppo-LunarLander-v2
|
guydegnol
| null | 12 | 6 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed, not confirmed.
checkpoint = load_from_hub(repo_id="guydegnol/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ivi137/Taxi-v3
|
ivi137
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 360 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the download helper defined in the Deep RL Course notebook;
# it fetches and unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="ivi137/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Jomppe2/Face
|
Jomppe2
| null | 2 | 0 | null | 0 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 4,907 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
stelladk/Reinforce-PixelCopter-PLE-v0
|
stelladk
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jha2ee/riffusion-model-db
|
jha2ee
| null | 19 | 8 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 426 |
### riffusion_model-db Dreambooth model trained by jha2ee with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
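A hedged loading sketch with 🧨 diffusers (the prompt, dtype, and device are assumptions, not taken from the notebook):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "jha2ee/riffusion-model-db", torch_dtype=torch.float16
).to("cuda")
# The instance token below is a guess based on the concept name; adjust as needed.
image = pipe("a riffusion_model-db spectrogram of a calm piano melody").images[0]
image.save("sample.png")
```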
Sample pictures of this concept:
|
ivi137/q-Taxi-v3
|
ivi137
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 362 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the download helper defined in the Deep RL Course notebook;
# it fetches and unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="ivi137/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
popcornell/chime7_task1_asr1_baseline
|
popcornell
| null | 23 | 7 |
espnet
| 0 |
automatic-speech-recognition
| false | false | false |
cc-by-4.0
|
['en']
|
['chime7_task1']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'automatic-speech-recognition', 'speech separation']
| false | true | true | 10,211 |
## ESPnet2 ASR model
### `popcornell/chime7_task1_asr1_baseline`
This model was trained by popcornell using chime7_task1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [CHiME-7 DASR installation instructions](https://github.com/espnet/espnet/blob/master/egs2/chime7_task1/asr1/README.md)
if you haven't done that already.
```bash
cd espnet
git checkout 15646109f254de8b39bbe310827d617da5ac858d
# follow installation instruction for CHiME-7 DASR recipe https://github.com/espnet/espnet/blob/master/egs2/chime7_task1/asr1/README.md
./run.sh --decode-only 1 --use-pretrained popcornell/chime7_task1_asr1_baseline --ngpu PUT YOURS
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
See [CHiME-7 DASR README.md](https://github.com/espnet/espnet/blob/master/egs2/chime7_task1/asr1/README.md)
## Environments
- date: `Wed Feb 8 23:41:28 UTC 2023`
- python version: `3.9.2 (default, Mar 3 2021, 20:02:32) [GCC 7.3.0]`
- espnet version: `espnet 202301`
- pytorch version: `pytorch 1.13.1+cu116`
- Git hash: ``
- Commit date: ``
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_transformer_wavlm_lr1e-4_specaugm_accum1_preenc128_warmup20k.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_wavlm_lr1e-4_specaugm_accum1_preenc128_warmup20k_raw_en_bpe500_batch_size640_scheduler_confwarmup_steps8000_max_epoch8_optim_conflr0.000500000000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 5
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 44341
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 8
patience: 4
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 640
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/kaldi/train_all_mdm_ihm_rvb_gss_sp/wav.scp
- speech
- sound
- - dump/raw/kaldi/train_all_mdm_ihm_rvb_gss_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/kaldi/chime6/dev/gss/wav.scp
- speech
- sound
- - dump/raw/kaldi/chime6/dev/gss/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 8000
token_list:
- <blank>
- <unk>
- s
- ''''
- ▁i
- t
- ▁it
- ▁a
- e
- ▁you
- ▁the
- ▁like
- ▁yeah
- a
- d
- ▁and
- m
- ▁that
- ▁to
- n
- i
- y
- ing
- o
- u
- ▁so
- p
- ▁of
- ▁in
- re
- ▁was
- c
- r
- ▁just
- er
- ▁know
- ▁oh
- ed
- ▁but
- ▁ummm
- ▁we
- l
- ▁no
- ▁they
- ▁have
- ▁do
- g
- ▁he
- k
- ll
- ▁uhhh
- ▁don
- ▁for
- h
- ▁what
- ▁be
- ar
- ▁is
- ▁there
- '-'
- ▁s
- ▁this
- in
- b
- ▁
- en
- ▁on
- ▁p
- ▁can
- al
- ▁not
- w
- ▁my
- ▁one
- ic
- f
- ▁or
- ▁really
- ▁go
- ▁right
- ▁me
- an
- ▁w
- or
- le
- ▁f
- ▁think
- ▁okay
- ▁all
- ▁then
- ▁with
- ▁are
- ▁get
- it
- ▁t
- ▁st
- ve
- ▁hmmm
- ▁g
- ▁if
- ce
- 'on'
- ▁she
- ▁good
- ▁e
- es
- ▁well
- v
- ▁re
- th
- ter
- ch
- ▁out
- ▁up
- ly
- ▁b
- ▁ma
- il
- ▁would
- ▁at
- ▁want
- ▁mean
- ▁ch
- ▁your
- ▁people
- ur
- ▁how
- ▁k
- ▁co
- ▁about
- ▁tr
- ▁ba
- ▁kind
- ▁when
- ▁mi
- ▁because
- ro
- ▁had
- ▁ho
- ▁gonna
- ▁time
- ▁more
- ▁got
- ▁some
- ▁two
- ▁did
- ▁see
- ▁now
- ▁pa
- ra
- ▁de
- ▁lot
- ▁actually
- ▁o
- ▁too
- ate
- ▁here
- ▁cuz
- ▁sp
- ▁where
- ▁going
- ▁j
- ▁from
- ▁bo
- ▁them
- ▁bu
- ▁put
- ▁thing
- ng
- ▁were
- ▁n
- ▁sh
- ▁work
- el
- ▁something
- ▁se
- ▁say
- ke
- ow
- ▁ca
- ▁fa
- ▁need
- sh
- ▁di
- ▁po
- ▁make
- la
- ▁br
- ▁v
- ▁an
- ▁who
- ion
- ▁y
- ▁look
- ▁didn
- ▁could
- ▁little
- ver
- ▁c
- ▁mo
- ▁much
- ▁very
- ir
- ▁sa
- ▁play
- ▁pretty
- ▁been
- ▁d
- ▁other
- ▁year
- and
- ▁mm
- ▁stuff
- ▁dr
- ▁why
- ▁con
- ▁su
- ▁back
- ▁ex
- ting
- ▁take
- ▁li
- ▁even
- ▁should
- ▁her
- ally
- lo
- ation
- ▁way
- ▁guess
- ▁has
- z
- ▁three
- ry
- ▁ha
- ies
- is
- x
- ▁ro
- ▁yes
- ▁th
- ▁use
- ▁down
- ous
- ▁over
- ▁probably
- ▁guys
- ▁maybe
- ▁still
- ▁cr
- ▁which
- ▁nice
- und
- ▁sure
- ▁l
- ▁off
- ▁la
- ▁cu
- est
- ▁any
- ▁fi
- ▁these
- ▁ra
- ▁went
- ▁things
- ment
- ▁doing
- ▁day
- ▁un
- ▁lo
- ▁da
- ▁only
- igh
- ▁come
- ▁big
- ▁those
- ▁wanna
- ▁bit
- ▁never
- ▁us
- ol
- ▁though
- ▁first
- ive
- ▁their
- ▁let
- ▁start
- ▁his
- ▁four
- ▁le
- ▁eat
- ist
- ▁school
- us
- ▁into
- ▁yep
- uck
- ▁than
- ▁him
- ▁hi
- ▁also
- ▁five
- side
- ▁new
- ▁comp
- ▁cool
- ▁talk
- ▁said
- ▁pro
- ▁r
- ▁always
- ▁ri
- ▁cl
- ▁long
- able
- ▁sc
- ▁gra
- ▁by
- ▁friend
- age
- ▁different
- ▁live
- ▁doesn
- ▁place
- ▁sorry
- ▁will
- ▁feel
- ▁does
- ▁part
- ▁wait
- ▁six
- ▁watch
- ▁anything
- ▁man
- ▁our
- ▁car
- ▁huh
- ▁whatever
- ▁last
- ▁give
- ▁ten
- ▁before
- ▁thought
- ▁after
- ▁game
- ▁card
- ▁fl
- ▁every
- cause
- ▁same
- ▁around
- ▁cook
- ▁week
- ▁hu
- ▁everything
- ▁fine
- ▁many
- ▁qu
- ▁read
- ▁tea
- ough
- ance
- ▁turn
- ▁wow
- ▁fun
- ▁hard
- ▁great
- ▁love
- ▁remember
- ▁twenty
- ▁whole
- ▁happen
- ▁seven
- ▁keep
- ▁food
- ▁most
- j
- ▁might
- ▁thank
- ▁move
- ▁job
- ▁eight
- ▁mu
- ▁sort
- ▁better
- port
- ▁another
- ful
- ▁point
- ▁show
- ▁again
- ▁high
- ize
- ▁house
- ▁home
- ▁person
- ▁old
- ▁end
- ▁through
- ▁pick
- ▁else
- ▁guy
- ▁app
- ▁find
- ▁nine
- ▁hand
- ▁kid
- ▁interesting
- ▁city
- ▁called
- ▁tell
- ▁half
- ▁name
- ▁definitely
- ▁made
- ▁exactly
- ▁came
- ▁wood
- ▁funny
- ▁basically
- ▁count
- ▁usually
- ▁help
- ▁someone
- ▁already
- ▁dunno
- ▁enough
- ction
- ▁own
- ▁weird
- ▁next
- ▁hundred
- ▁small
- ▁money
- ▁couple
- ▁while
- ▁close
- ▁movie
- ▁sometimes
- ▁everyone
- ▁away
- ▁true
- ▁super
- ▁cheese
- ▁class
- ▁night
- ▁life
- ▁leave
- ▁plan
- ▁water
- ▁left
- ▁thirty
- ▁family
- ▁phone
- ▁build
- ▁room
- ▁month
- ▁open
- ▁idea
- ▁second
- ▁dude
- ▁music
- ▁each
- ▁learn
- ▁girl
- ▁together
- ▁under
- ▁run
- ▁chicken
- ▁having
- ▁either
- ▁almost
- ▁crazy
- ▁book
- ▁sauce
- ▁supposed
- ▁course
- ▁speak
- ▁awesome
- ▁anyway
- ▁throw
- ▁finish
- ▁world
- ▁reason
- ▁check
- ▁least
- ▁parents
- ▁everybody
- ▁change
- '&'
- ä
- '#'
- ñ
- â
- é
- ü
- ']'
- q
- î
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: data/nlsyms.txt
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_large
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: false
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: false
freq_mask_width_range:
- 0
- 150
num_freq_mask: 4
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.15
num_time_mask: 3
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 128
dropout: 0.2
encoder: transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d2
normalize_before: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.0
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202301'
distributed: true
```
</details>
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
AndrewOgn/IntelasMatcher_homecase
|
AndrewOgn
|
bert
| 12 | 6 |
sentence-transformers
| 0 |
sentence-similarity
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
| false | true | true | 2,138 |
# AndrewOgn/IntelasMatcher_homecase
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AndrewOgn/IntelasMatcher_homecase')
embeddings = model.encode(sentences)
print(embeddings)
```
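The returned embeddings can then be compared directly, e.g. with cosine similarity:
```python
from sentence_transformers import util

# Pairwise cosine similarity between the two example sentences above
print(util.cos_sim(embeddings[0], embeddings[1]))
```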
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=AndrewOgn/IntelasMatcher_homecase)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 8 with parameters:
```
{'batch_size': 512, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
petergoldstein/q-FrozenLake-v1-4x4-noSlippery
|
petergoldstein
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 403 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the download helper defined in the Deep RL Course notebook;
# it fetches and unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="petergoldstein/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
deprem-ml/distilroberta-tweet-clustering-embeddings
|
deprem-ml
| null | 3 | 0 | null | 0 |
feature-extraction
| false | false | false |
apache-2.0
|
['tr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 297 |
## Clustering Embeddings for Topic Modelling
We will keep updating this repository. The notebook is here: https://github.com/metekemertas/deprem-intent-classification-tfidf/blob/main/unsupervised_analysis.py
This repo contains embeddings extracted from the tweet data for identifying needs (topic modelling).
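A hedged usage sketch (the embedding filename, array format, and cluster count are all assumptions; check the repo's files):
```python
import numpy as np
from huggingface_hub import hf_hub_download
from sklearn.cluster import KMeans

# "embeddings.npy" is an assumed filename; inspect the repo for the actual artifact.
path = hf_hub_download(
    repo_id="deprem-ml/distilroberta-tweet-clustering-embeddings",
    filename="embeddings.npy",
)
embeddings = np.load(path)
labels = KMeans(n_clusters=10, n_init=10).fit_predict(embeddings)  # cluster count is an assumption
```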
|
petergoldstein/Taxi-v3-default
|
petergoldstein
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 376 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the download helper defined in the Deep RL Course notebook;
# it fetches and unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="petergoldstein/Taxi-v3-default", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
rjac/test1
|
rjac
|
whisper
| 21 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['stakwork/bitcoin-clips-V3']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,035 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Cryptocurrency
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the stakwork/bitcoin-clips-V3 en dataset.
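A minimal transcription sketch using the 🤗 `pipeline` API (the audio path is a placeholder for a local file):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="rjac/test1")
print(asr("bitcoin_clip.wav")["text"])  # path to a local audio file
```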
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
daripaez/poca-SoccerTwos-1
|
daripaez
| null | 30 | 238 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 844 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We also wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: daripaez/poca-SoccerTwos-1
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kellyxuanlin/bert-finetuned-squad
|
kellyxuanlin
|
bert
| 12 | 9 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 954 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
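A minimal extractive question-answering sketch via the 🤗 `pipeline` API (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="kellyxuanlin/bert-finetuned-squad")
print(qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD dataset.",
))
```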
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
HueyNemud/icdar23-entrydetector_plaintext_breaks_indents_left_diff_right_ref
|
HueyNemud
|
camembert
| 11 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,167 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# icdar23-entrydetector_plaintext_breaks_indents_left_diff_right_ref
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0078
- Ebegin: {'precision': 0.9920303605313093, 'recall': 0.9830763444904099, 'f1': 0.9875330562901399, 'number': 2659}
- Eend: {'precision': 0.9958443520967133, 'recall': 0.9850523168908819, 'f1': 0.9904189366898367, 'number': 2676}
- Overall Precision: 0.9939
- Overall Recall: 0.9841
- Overall F1: 0.9890
- Overall Accuracy: 0.9982
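As with the sibling entry detectors, inference works through the token-classification pipeline (the input is an illustrative directory entry):
```python
from transformers import pipeline

detector = pipeline(
    "token-classification",
    model="HueyNemud/icdar23-entrydetector_plaintext_breaks_indents_left_diff_right_ref",
    aggregation_strategy="simple",
)
print(detector("Durand (P.), serrurier, boulevard Voltaire, 45."))
```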
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.07 | 300 | 0.0314 | 0.9572 | 0.9870 | 0.9719 | 0.9956 |
| 0.1574 | 0.14 | 600 | 0.0145 | 0.9897 | 0.9834 | 0.9866 | 0.9979 |
| 0.1574 | 0.21 | 900 | 0.0098 | 0.9896 | 0.9917 | 0.9907 | 0.9985 |
| 0.0161 | 0.29 | 1200 | 0.0079 | 0.9919 | 0.9921 | 0.9920 | 0.9987 |
| 0.0107 | 0.36 | 1500 | 0.0072 | 0.9895 | 0.9928 | 0.9911 | 0.9986 |
| 0.0107 | 0.43 | 1800 | 0.0116 | 0.9900 | 0.9877 | 0.9888 | 0.9981 |
| 0.0114 | 0.5 | 2100 | 0.0069 | 0.9965 | 0.9898 | 0.9931 | 0.9988 |
| 0.0114 | 0.57 | 2400 | 0.0055 | 0.9955 | 0.9907 | 0.9931 | 0.9989 |
| 0.0082 | 0.64 | 2700 | 0.0051 | 0.9870 | 0.9956 | 0.9913 | 0.9985 |
| 0.0062 | 0.72 | 3000 | 0.0046 | 0.9903 | 0.9957 | 0.9930 | 0.9988 |
| 0.0062 | 0.79 | 3300 | 0.0038 | 0.9957 | 0.9929 | 0.9943 | 0.9990 |
| 0.0051 | 0.86 | 3600 | 0.0038 | 0.9956 | 0.9943 | 0.9949 | 0.9992 |
| 0.0051 | 0.93 | 3900 | 0.0047 | 0.9902 | 0.9942 | 0.9921 | 0.9987 |
| 0.0041 | 1.0 | 4200 | 0.0035 | 0.9979 | 0.9917 | 0.9948 | 0.9991 |
| 0.0029 | 1.07 | 4500 | 0.0036 | 0.9973 | 0.9926 | 0.9949 | 0.9992 |
| 0.0029 | 1.14 | 4800 | 0.0038 | 0.9969 | 0.9916 | 0.9942 | 0.9990 |
| 0.0034 | 1.22 | 5100 | 0.0036 | 0.9953 | 0.9935 | 0.9944 | 0.9991 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
petergoldstein/Taxi-v3-1M
|
petergoldstein
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 371 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the download helper defined in the Deep RL Course notebook;
# it fetches and unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="petergoldstein/Taxi-v3-1M", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
thanat/marian-finetuned-kde4-en-to-fr
|
thanat
|
marian
| 10 | 10 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,497 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# thanat/marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the [kde4](https://huggingface.co/datasets/kde4) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6857
- Validation Loss: 0.8034
- Epoch: 2
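A minimal English-to-French translation sketch via the 🤗 `pipeline` API:
```python
from transformers import pipeline

translator = pipeline("translation", model="thanat/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```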
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0622 | 0.8817 | 0 |
| 0.7977 | 0.8203 | 1 |
| 0.6857 | 0.8034 | 2 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
SuryaaSeran/bert-base-uncased-finetuned-swag
|
SuryaaSeran
|
bert
| 16 | 15 |
transformers
| 0 |
multiple-choice
| true | false | false |
apache-2.0
| null |
['dream']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,332 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the dream dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0986
- Accuracy: 0.3642
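A hedged inference sketch for the multiple-choice head (the dialogue context and options are illustrative, not from the dream dataset):
```python
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo = "SuryaaSeran/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

context = "W: It looks like rain. M: Yes, the forecast says a storm is coming."  # illustrative
options = ["It will rain.", "It will be sunny.", "It will snow."]

# Pair the context with each candidate answer; logits have shape (1, num_choices).
enc = tokenizer([context] * len(options), options, return_tensors="pt", padding=True)
logits = model(**{k: v.unsqueeze(0) for k, v in enc.items()}).logits
print(options[int(logits.argmax(dim=-1))])
```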
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1039 | 1.0 | 3058 | 1.0983 | 0.3779 |
| 1.0995 | 2.0 | 6116 | 1.0986 | 0.3544 |
| 1.1029 | 3.0 | 9174 | 1.0986 | 0.3642 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
yizhangliu/poca-SoccerTwos-v4
|
yizhangliu
| null | 21 | 240 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 847 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We also wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: yizhangliu/poca-SoccerTwos-v4
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
coreml/coreml-3DKX
|
coreml
| null | 17 | 0 | null | 2 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['coreml', 'stable-diffusion', 'text-to-image']
| false | true | true | 9,737 |
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br>
- Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
- `original` version is only compatible with CPU & GPU option.<br>
- `vae` tagged files have a vae embedded into the model. Most common are 1.5 or 2.1 others are named accordingly.<br>
- Descriptions are posted as-is from original model source. Not all features and/or results may be available in CoreML format.<br>
# Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
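These Core ML weights are meant for apps like Mochi Diffusion, but for command-line experimentation Apple's [ml-stable-diffusion](https://github.com/apple/ml-stable-diffusion) repo also ships a Python pipeline that can load converted models. A hedged sketch follows; the local model directory and prompt are placeholders:
```bash
# Assumes apple/ml-stable-diffusion is installed (pip install -e . from a checkout).
python -m python_coreml_stable_diffusion.pipeline \
  --prompt "a 3d render of a cozy cottage in a forest" \
  -i ./coreml-3DKX \
  -o ./outputs \
  --compute-unit ALL \
  --seed 93
```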
# H&A 3DKX:
Source(s): [Hugging Face](https://huggingface.co/HavoAloe/3DKX_1.0b) - [CivitAI](https://civitai.com/models/2504/handas-3dkx-11)
<a href="https://discord.gg/havonaloe">
<img src="https://cdn.discordapp.com/attachments/1051410188592226364/1061335270194171944/havo_aloe_banner_copie.jpg" alt="image description" width="768">
</a>
<a href="https://www.patreon.com/aloeNhavo">
<img src="https://cdn.discordapp.com/attachments/1051410188592226364/1061335270483566662/havo_aloe_banner_patreon.jpg" alt="image description" width="768">
</a>
Model name: H&A 3DKX
Model versions: 1.0b, 1.1(latest)
## Changelog:
V1.1: Minor update based on feedback, containing the following fixes:
- “nsfw”, “nudity”, and “erotica” have been trained into the model and work as Negatives to greatly reduce unintended NSFW content.
- CFG can be pushed a bit higher before the images become burnt. As a result, the model can accommodate more complicated prompts now.
- Oversaturated images will be encountered way less often
## Description:
SFW model with limited nsfw capabilities (suggestive nsfw) that is highly versatile for 3D renders.
The model has the particularity of splitting itself into two distinct, well-balanced styles.
If you'd like your 3D characters to have a more "cartoony" face, simply start your prompt
with "3d cartoon of"; if you want the classic 3D render style, start with "a 3d render of".
Please check the cheat sheet for prompting tips, as the structure of the prompt and the negatives used have a huge effect.
Note: the model has an embedded VAE, so do not add one. It will work best in most cases and is configured for higher resolutions.
## Model has an embedded VAE, do not use an extra one!
If you want to chat with us and join our community visit our discord:
https://discord.gg/CPyqJgXdRG
## Dataset:
- between 140 and 180 pictures of 3D renders of all kinds
## PromptGuide/Cheat Sheet
[3DKX_1.0b/1.1 Guide](https://docs.google.com/document/d/15pJ3TkbmX3LRoSTNsMYsbetvO7A46L60wVOxIL2ZZ6E/)
## Has a high success rate at:
- sfw portraits, full body poses, close ups, etc
- high versatility in terms of outputs; it isn't locked to performing well on portraits
- Landscapes, cyberpunk, steampunk, natural, scifi, etc
- 2B Nier Automata (Don't ask us why)
- different body types - different ethnicity
- nsfw portraits, full body poses, close ups, etc
## What it "In theory" shouldn't exceed at:
- anything outside the scope of portraits, people, landscapes, game artworks, 3D sculptures, 3D fantasy, 3D film stills, etc
- celebrities
- highly specific animated cartoon characters
- multiple subjects
- highly specific video-game characters
- pornography, genitalia and highly explicit materials
<img width="768px" src="https://cdn.discordapp.com/attachments/1056287982363086930/1056331346177425438/00011-3928902726-A203d20render20of2020epic20portrait20close20shot20of20beautiful20turkish20woman20wearing20with20angelic20feathered20wings20gold20armour20neckline.png">
<img width="768px" src="https://cdn.discordapp.com/attachments/323893037379878912/1056823178846031882/00494-2262985444-3d_render_of_a_sharp_focused_detailed_photo_of_a_super_car_with_iridescent_metallic_color_driving_on_a_midnight_road_multicolo.png">
<img width="768px" src="https://media.discordapp.net/attachments/1056287982363086930/1056387208900268062/00102-971809704-movie_still_of_a_alien_from_mass_effect_wearing_scifi_armor_disney_pixar_animation_3d_render_4k_resolution_very_detail.png">
<img width="768px" src="https://cdn.discordapp.com/attachments/1056287982363086930/1056399456695767151/00143-556286556-3d_render_of_a_cute_simba_from_the_lion_king_disney_pixar_animation_RDR_2_game_render_lion_king_movie_still_very_detailed_4.png">
<img width="768px" src="https://media.discordapp.net/attachments/1056287982363086930/1056385527907110963/00097-3269033961-picture_of_a_handsome_viking_chief_in_his_village_disney_pixar_animation_3d_render_4k_resolution_very_detailed_movie_stil.png?width=645&height=806">
<img width="768px" src="https://cdn.discordapp.com/attachments/1056287982363086930/1056340815011659917/02082-2709311262-A_3d_render_of_a_cute_tiny_little_fluffy_monster_with_googly_eyes_running_in_a_huge_bedroom_antview_bokeh_closeup_highly_det.png">
<img width="768px" src="https://media.discordapp.net/attachments/1051410188592226364/1056636097330942022/02045-172340656-A_3d_render_of_A_mature_woman_with_short_styled_hair_and_wearing_a_colorful_printed_blouse_seated_in_a_cozy_armchair_with_a_wa.png">
<img width="768px" src="https://cdn.discordapp.com/attachments/1051410188592226364/1056636096412389508/02029-172340656-A_3d_cartoon_of_A_mature_woman_with_short_styled_hair_and_wearing_a_colorful_printed_blouse_seated_in_a_cozy_armchair_with_a.png">
<img width="768px" src="https://cdn.discordapp.com/attachments/1051410188592226364/1056636286347247616/01925-2050823061-A_woman_with_short_bobbed_hair_styled_in_a_choppy_textured_look_wearing_a_cyberpunk-inspired_outfit_with_neon_accents_and_boo.png">
<img width="768px" src="https://cdn.discordapp.com/attachments/1051410188592226364/1056636296103198740/01858-1327022461-A_slender_woman_with_pale_skin_short_blonde_hair_and_bright_blue_eyes._She_is_standing_in_a_bright_white_studio_surrounded_by.png">
<img width="768px" src="https://cdn.discordapp.com/attachments/1051410188592226364/1056636414059618406/02107-599009770-A_3d_cartoon_of_a_a_beautiful_spanish_woman_wearing_Kimono_neckline_fine_-_art_photography_cinematic_portrait_shot_8_k_mid.png">
## Use Restrictions
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national, federal, state, local or international law or regulation;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate personal identifiable information that can be used to harm an individual;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories;
- To provide medical advice and medical results interpretation;
- To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).
## Important notes:
- This model’s datasets do NOT contain any character that could be remotely described as a child, or underage.
- Our datasets contains no mentions of the artist's name, nor specific styles from any artist whatsoever.
- The creators (Havo and Aloe Vera) will not be held accountable for the way this model is being used or the outputs that any person may generate.
- The purpose of this model isn't to replicate a style, but to provide a useful tool for creators of all kinds to generate 3D-related content
- Be advised that this model can generate explicit material and therefore shouldn't be used in any way to cause harm or produce non-consensual sexual content.
## Conclusion:
We do have limited resources, so our weeks' worth of testing cannot realistically encapsulate the full potential of the model, which is why we're very excited to discover what YOU, the awesome creators, will make out of this tool. And for anyone who feels we're worth a shot, we invite you to have a look at our Patreon, where you can choose to chip in and support our work.
We have many plans, and we'd like to have some more resources so we can work more efficiently and eventually create models of professional standards. That's our goal!
https://www.patreon.com/aloeNhavo
|