modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---
huggingtweets/elonmusk-jack
|
3edad9deb7a90f81eb1cc3e7c70c8d861423280c
|
2022-06-13T04:16:05.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/elonmusk-jack
| 0 | null |
transformers
| 38,100 |
---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-jack/1655093760817/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529956155937759233/Nyn1HZWF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1115644092329758721/AFjOr-K8_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & jack</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-jack</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & jack.
| Data | Elon Musk | jack |
| --- | --- | --- |
| Tweets downloaded | 3200 | 3232 |
| Retweets | 147 | 1137 |
| Short tweets | 959 | 832 |
| Tweets kept | 2094 | 1263 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2zwk8y4o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-jack's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/16z5871k) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/16z5871k/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/elonmusk-jack')
generator("My dream is", num_return_sequences=5)
```
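For finer control over sampling than the pipeline exposes, the same checkpoint can be loaded through the lower-level generation API; a minimal sketch, with `max_length` and `top_p` chosen purely for illustration:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('huggingtweets/elonmusk-jack')
model = AutoModelForCausalLM.from_pretrained('huggingtweets/elonmusk-jack')

inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,                       # sample rather than decode greedily
    max_length=40,                        # illustrative cap on total tokens
    top_p=0.95,                           # illustrative nucleus-sampling threshold
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 defines no pad token
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```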
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/fbinegotiator
|
26007c335e8c3d271bbc26d2371ee7f94997df40
|
2022-06-13T04:22:31.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/fbinegotiator
| 0 | null |
transformers
| 38,101 |
---
language: en
thumbnail: http://www.huggingtweets.com/fbinegotiator/1655094146705/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1312911855187181568/W1hAKDaA_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Christopher Voss</div>
<div style="text-align: center; font-size: 14px;">@fbinegotiator</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Christopher Voss.
| Data | Christopher Voss |
| --- | --- |
| Tweets downloaded | 3235 |
| Retweets | 370 |
| Short tweets | 98 |
| Tweets kept | 2767 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/uat42o9x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fbinegotiator's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1g9amvgc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1g9amvgc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/fbinegotiator')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nestoralvaro/mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t22027_162754.csv__g_mt5_base_L5
|
f2d9b621c3cedb02cda71537ff256fed8acb4ddd
|
2022-06-13T14:19:45.000Z
|
[
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] |
text2text-generation
| false |
nestoralvaro
| null |
nestoralvaro/mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t22027_162754.csv__g_mt5_base_L5
| 0 | null |
transformers
| 38,102 |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t22027_162754.csv__g_mt5_base_L5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t22027_162754.csv__g_mt5_base_L5
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.7722
- Rouge2: 0.0701
- Rougel: 0.772
- Rougelsum: 0.7717
- Gen Len: 6.329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
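For anyone reproducing this run, the list above maps onto standard `transformers` training arguments roughly as follows; a minimal sketch in which `output_dir` is a placeholder and the Adam betas/epsilon simply restate the optimizer line:
```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: restates the hyperparameter list above; "mt5-finetuned" is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-finetuned",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed-precision training
)
```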
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 131773 | nan | 0.7722 | 0.0701 | 0.772 | 0.7717 | 6.329 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
iaanimashaun/opus-mt-en-sw-finetuned-en-to-sw
|
e8058ec9903b036ae58ffb8903d2823feac394e5
|
2022-06-16T06:40:29.000Z
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] |
text2text-generation
| false |
iaanimashaun
| null |
iaanimashaun/opus-mt-en-sw-finetuned-en-to-sw
| 0 | null |
transformers
| 38,103 |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-en-sw-finetuned-en-to-sw
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-sw-finetuned-en-to-sw
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-sw](https://huggingface.co/Helsinki-NLP/opus-mt-en-sw) on an unknown dataset.
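The card ships without a usage snippet; since the base checkpoint is a Marian English-to-Swahili translator, the standard translation pipeline should apply. A minimal sketch, assuming the fine-tuned checkpoint keeps the base model's configuration:
```python
from transformers import pipeline

# Hedged sketch: uses the generic translation task, which Marian models support out of the box.
translator = pipeline('translation',
                      model='iaanimashaun/opus-mt-en-sw-finetuned-en-to-sw')
print(translator("How are you today?")[0]['translation_text'])
```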
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 113 | 0.9884 | 50.2226 | 19.0434 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
simecek/humandna_DEBERTA_1epoch
|
01ed779246471e336e6937c26e2c9e02e5666c42
|
2022-06-13T07:06:03.000Z
|
[
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
| false |
simecek
| null |
simecek/humandna_DEBERTA_1epoch
| 0 | null |
transformers
| 38,104 |
Entry not found
|
huggingtweets/demondicekaren
|
f4fb47bb69e9288d601fd6f6c6b6c216798c0d33
|
2022-06-13T07:19:24.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/demondicekaren
| 0 | null |
transformers
| 38,105 |
---
language: en
thumbnail: http://www.huggingtweets.com/demondicekaren/1655104759793/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488027988075507712/FTIBkQRn_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ππ² || DEMONDICE</div>
<div style="text-align: center; font-size: 14px;">@demondicekaren</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ππ² || DEMONDICE.
| Data | ππ² || DEMONDICE |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 371 |
| Short tweets | 617 |
| Tweets kept | 2258 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fxxzewl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @demondicekaren's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ow01rap) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ow01rap/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/demondicekaren')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sangcamap/sangcamaptest
|
8a846c2e8b721df6179714595ed7264931ff265f
|
2022-06-13T17:20:22.000Z
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
] |
question-answering
| false |
sangcamap
| null |
sangcamap/sangcamaptest
| 0 | null |
transformers
| 38,106 |
---
license: gpl-3.0
---
|
huggingtweets/ruinsman
|
1c067ff2fb97b78f5425601e0ad6de2fc38a4b20
|
2022-06-13T09:33:18.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/ruinsman
| 0 | null |
transformers
| 38,107 |
---
language: en
thumbnail: http://www.huggingtweets.com/ruinsman/1655112758889/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1428391928110911499/qWeZuRbL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ManAmongTheRuins</div>
<div style="text-align: center; font-size: 14px;">@ruinsman</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ManAmongTheRuins.
| Data | ManAmongTheRuins |
| --- | --- |
| Tweets downloaded | 3184 |
| Retweets | 424 |
| Short tweets | 213 |
| Tweets kept | 2547 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3evn1l2w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ruinsman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/apc372yb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/apc372yb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/ruinsman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/salgotrader
|
6392dae0ca80d3d3526ff9352fe879451d352f09
|
2022-06-13T14:46:27.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/salgotrader
| 0 | null |
transformers
| 38,108 |
---
language: en
thumbnail: http://www.huggingtweets.com/salgotrader/1655131582645/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521075169611112448/S_w82Ewg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">0xPatrician.eth</div>
<div style="text-align: center; font-size: 14px;">@salgotrader</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 0xPatrician.eth.
| Data | 0xPatrician.eth |
| --- | --- |
| Tweets downloaded | 910 |
| Retweets | 250 |
| Short tweets | 84 |
| Tweets kept | 576 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2f275xqv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @salgotrader's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ljt0uhcw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ljt0uhcw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/salgotrader')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ryo0634/bert-base-log_linear-dependency-0
|
dd79b8943265b08fb085fee23a5efdd7e8720a18
|
2022-06-13T15:00:44.000Z
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
| false |
ryo0634
| null |
ryo0634/bert-base-log_linear-dependency-0
| 0 | null |
transformers
| 38,109 |
Entry not found
|
sdugar/cross-en-de-fr-minilm-384d-sentence-transformer
|
369c22a4907fd3e63e7b4f58b97e310dbb33a4b1
|
2022-06-13T16:51:13.000Z
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers",
"license:mit"
] |
feature-extraction
| false |
sdugar
| null |
sdugar/cross-en-de-fr-minilm-384d-sentence-transformer
| 0 | null |
transformers
| 38,110 |
---
license: mit
---
|
kravchenko/uk-mt5-small-gec
|
3c7ee9608074378b4225944e5a54423e4235dd1b
|
2022-06-13T16:29:10.000Z
|
[
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
kravchenko
| null |
kravchenko/uk-mt5-small-gec
| 0 | null |
transformers
| 38,111 |
Entry not found
|
simecek/DNAMobileBert
|
ce054e936cccefab743d3437df9d74469323efc6
|
2022-06-14T16:23:31.000Z
|
[
"pytorch",
"tensorboard",
"mobilebert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
| false |
simecek
| null |
simecek/DNAMobileBert
| 0 | null |
transformers
| 38,112 |
Entry not found
|
kravchenko/uk-mt5-base-gec
|
b2c55c45d4db1b961faa7b8043d1b175ca7fdee9
|
2022-06-13T16:31:43.000Z
|
[
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
kravchenko
| null |
kravchenko/uk-mt5-base-gec
| 0 | null |
transformers
| 38,113 |
Entry not found
|
kravchenko/uk-mt5-large-gec
|
fc922cf49a155b489e53288137f23e0526916e82
|
2022-06-13T16:39:46.000Z
|
[
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
kravchenko
| null |
kravchenko/uk-mt5-large-gec
| 0 | null |
transformers
| 38,114 |
Entry not found
|
nestoralvaro/mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t22027_162754.csv__g_mt5_base_L2
|
45b541b07e13f1d4fe92b1f5d2fa8c98395fda4e
|
2022-06-14T02:06:07.000Z
|
[
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] |
text2text-generation
| false |
nestoralvaro
| null |
nestoralvaro/mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t22027_162754.csv__g_mt5_base_L2
| 0 | null |
transformers
| 38,115 |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t22027_162754.csv__g_mt5_base_L2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t22027_162754.csv__g_mt5_base_L2
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0127
- Rouge2: 0.0
- Rougel: 0.0128
- Rougelsum: 0.0129
- Gen Len: 6.329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 131773 | nan | 0.0127 | 0.0 | 0.0128 | 0.0129 | 6.329 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mailenpellegrino/transformerRuperta
|
baa9279e6e95b66c4e613eeb56f966addd5f3d07
|
2022-06-13T18:19:40.000Z
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
| false |
mailenpellegrino
| null |
mailenpellegrino/transformerRuperta
| 0 | null |
transformers
| 38,116 |
Entry not found
|
mcalcagno/mcd101-finedtuned-beto-xnli
|
77d9dc44315d2cab074e3678484a78cffc74a712
|
2022-06-13T18:23:45.000Z
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
| false |
mcalcagno
| null |
mcalcagno/mcd101-finedtuned-beto-xnli
| 0 | null |
transformers
| 38,117 |
Entry not found
|
micamorales/roberta-NLI-simple
|
ef1e8e206424f19266285517f5b6deb234905671
|
2022-06-13T18:20:55.000Z
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
| false |
micamorales
| null |
micamorales/roberta-NLI-simple
| 0 | null |
transformers
| 38,118 |
Entry not found
|
micamorales/roberta-NLI-simple2
|
e791abf9d525beac9895c2ca65d63f3415e24ad0
|
2022-06-13T18:27:40.000Z
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
| false |
micamorales
| null |
micamorales/roberta-NLI-simple2
| 0 | null |
transformers
| 38,119 |
Entry not found
|
income/jpq-question_encoder-base-msmarco-contriever
|
09594e60bc9934fc8ecd0f123c14304495ebe83c
|
2022-06-13T21:00:45.000Z
|
[
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-question_encoder-base-msmarco-contriever
| 0 | null |
transformers
| 38,120 |
---
license: apache-2.0
---
|
jacklin/DeLADE-CLS-P
|
4f82e5ddf097de89f4f866f47c3189245d50ff0a
|
2022-06-13T21:42:41.000Z
|
[
"pytorch",
"arxiv:2112.04666"
] | null | false |
jacklin
| null |
jacklin/DeLADE-CLS-P
| 0 | null | null | 38,121 |
This model, (DeLADE+[CLS])+, fuses neural lexical and semantic matching components in a single transformer with DistilBERT as the backbone, trained with hard-negative mining and knowledge distillation from a ColBERT teacher, as detailed in the paper below.
*[A Dense Representation Framework for Lexical and Semantic Matching](https://arxiv.org/pdf/2112.04666.pdf)* by Sheng-Chieh Lin and Jimmy Lin.
You can find the usage of the model in our [DHR repo](https://github.com/jacklin64/DHR): (1) [Inference on MSMARCO Passage Ranking](https://github.com/castorini/DHR/blob/main/docs/msmarco-passage-train-eval.md); (2) [Inference on BEIR datasets](https://github.com/castorini/DHR/blob/main/docs/beir-eval.md).
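Because the checkpoint is plain PyTorch rather than a registered `transformers` architecture, the model class has to come from the DHR repository above. As a quick sanity check of the weights, the raw file can be pulled from the Hub; a minimal sketch in which the filename `pytorch_model.bin` is an assumption about the repository layout:
```python
from huggingface_hub import hf_hub_download
import torch

# Sketch only: downloads the raw checkpoint and lists a few parameter names.
path = hf_hub_download("jacklin/DeLADE-CLS-P", "pytorch_model.bin")
state_dict = torch.load(path, map_location="cpu")
print(sorted(state_dict)[:5])
```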
|
micamorales/roberta-NLI-abs2
|
c5135ea951469f365e01c5172500c5174ba3469d
|
2022-06-13T21:12:14.000Z
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
| false |
micamorales
| null |
micamorales/roberta-NLI-abs2
| 0 | null |
transformers
| 38,122 |
Entry not found
|
huggingtweets/honiemun
|
15f9f1e720ba63dc0e22aea866d434e8bebf03ce
|
2022-06-13T23:11:55.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/honiemun
| 0 | null |
transformers
| 38,123 |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509372264424296448/HVPI1lQu_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ππ°π―πͺπ¦ β‘</div>
<div style="text-align: center; font-size: 14px;">@honiemun</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ππ°π―πͺπ¦ β‘.
| Data | ππ°π―πͺπ¦ β‘ |
| --- | --- |
| Tweets downloaded | 3207 |
| Retweets | 231 |
| Short tweets | 381 |
| Tweets kept | 2595 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/teqt0sk7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @honiemun's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3bqoay71) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3bqoay71/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/honiemun')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
geronimo/RobertaBNE2
|
f5069c098d5c485be25b565ec11f2d934dee8b8e
|
2022-06-14T20:35:43.000Z
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
| false |
geronimo
| null |
geronimo/RobertaBNE2
| 0 | null |
transformers
| 38,124 |
Entry not found
|
huggingtweets/horse_js
|
b2887f889e51e8124f3efe8b8133913a17170037
|
2022-06-14T05:59:52.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/horse_js
| 0 | null |
transformers
| 38,125 |
---
language: en
thumbnail: http://www.huggingtweets.com/horse_js/1655186387828/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1844491454/horse-js_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Horse JS</div>
<div style="text-align: center; font-size: 14px;">@horse_js</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Horse JS.
| Data | Horse JS |
| --- | --- |
| Tweets downloaded | 3200 |
| Retweets | 1 |
| Short tweets | 163 |
| Tweets kept | 3036 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ucaep55/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @horse_js's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/213qs19z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/213qs19z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/horse_js')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
winson/custom-resnet50d
|
a5b507c389136f5ef50754d20fceff3086dbec1c
|
2022-06-14T09:34:53.000Z
|
[
"pytorch",
"resnet",
"transformers"
] | null | false |
winson
| null |
winson/custom-resnet50d
| 0 | null |
transformers
| 38,126 |
Entry not found
|
mshoaibsarwar/pegasus-pdm-news
|
74fd9c4784ec251ab5ad7210992a8587e1da3df8
|
2022-06-14T14:54:33.000Z
|
[
"pytorch",
"pegasus",
"text2text-generation",
"unk",
"dataset:mshoaibsarwar/autotrain-data-pdm-news",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] |
text2text-generation
| false |
mshoaibsarwar
| null |
mshoaibsarwar/pegasus-pdm-news
| 0 | 1 |
transformers
| 38,127 | |
saiharsha/vit-base-beans
|
9718aa042724a7a57613c543e87e69e6613decb6
|
2022-06-14T09:54:53.000Z
|
[
"pytorch",
"vit",
"image-classification",
"dataset:beans",
"transformers",
"vision",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] |
image-classification
| false |
saiharsha
| null |
saiharsha/vit-base-beans
| 0 | null |
transformers
| 38,128 |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9699248120300752
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1824
- Accuracy: 0.9699
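Since this is a stock ViT fine-tune, the image-classification pipeline should work directly; a minimal sketch in which `leaf.jpg` is a placeholder path to a bean-leaf photo:
```python
from transformers import pipeline

classifier = pipeline('image-classification', model='saiharsha/vit-base-beans')
for pred in classifier('leaf.jpg'):  # accepts a path, URL, or PIL image
    print(f"{pred['label']}: {pred['score']:.3f}")
```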
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.672 | 1.0 | 44 | 0.5672 | 0.9398 |
| 0.411 | 2.0 | 88 | 0.3027 | 0.9699 |
| 0.2542 | 3.0 | 132 | 0.2078 | 0.9699 |
| 0.1886 | 4.0 | 176 | 0.1882 | 0.9699 |
| 0.1931 | 5.0 | 220 | 0.1824 | 0.9699 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Waleed-bin-Qamar/ConvNext-For-Covid-Classification
|
41e33d9daf4481ca75aec8a99986c0b3dcd97f43
|
2022-06-14T11:14:27.000Z
|
[
"pytorch",
"convnext",
"image-classification",
"transformers",
"license:afl-3.0"
] |
image-classification
| false |
Waleed-bin-Qamar
| null |
Waleed-bin-Qamar/ConvNext-For-Covid-Classification
| 0 | null |
transformers
| 38,129 |
---
license: afl-3.0
---
|
sdugar/cross-en-de-fr-xlmr-768d-sentence-transformer
|
c2c60c8b610be019fff69ce80f5b4f19d0d59bd4
|
2022-06-15T06:40:04.000Z
|
[
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers",
"license:mit"
] |
feature-extraction
| false |
sdugar
| null |
sdugar/cross-en-de-fr-xlmr-768d-sentence-transformer
| 0 | null |
transformers
| 38,130 |
---
license: mit
---
|
lmqg/t5-small-squadshifts-new_wiki
|
a0b198126757cf64b66231f6227691a9d49cf705
|
2022-06-14T10:33:35.000Z
|
[
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
lmqg
| null |
lmqg/t5-small-squadshifts-new_wiki
| 0 | null |
transformers
| 38,131 |
Entry not found
|
lmqg/t5-small-squadshifts-amazon
|
998ff2b330fe510b89b261a0a390a844ba7bf2bf
|
2022-06-14T10:38:29.000Z
|
[
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
lmqg
| null |
lmqg/t5-small-squadshifts-amazon
| 0 | null |
transformers
| 38,132 |
Entry not found
|
sdugar/test
|
9d36bbf943618db48ca68cbe2f877a9783cd97e3
|
2022-06-14T10:45:34.000Z
|
[
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers",
"license:mit"
] |
feature-extraction
| false |
sdugar
| null |
sdugar/test
| 0 | null |
transformers
| 38,133 |
---
license: mit
---
|
huggingtweets/iamekagra
|
161c2257e78b20d2d4946fc1e562d4576caab581
|
2022-06-14T11:39:21.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/iamekagra
| 0 | null |
transformers
| 38,134 |
---
language: en
thumbnail: http://www.huggingtweets.com/iamekagra/1655206726797/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1436804952119132162/47MeY1N1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ekagra Sinha</div>
<div style="text-align: center; font-size: 14px;">@iamekagra</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ekagra Sinha.
| Data | Ekagra Sinha |
| --- | --- |
| Tweets downloaded | 487 |
| Retweets | 69 |
| Short tweets | 69 |
| Tweets kept | 349 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ceh71sg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iamekagra's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/rf0li8b0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/rf0li8b0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/iamekagra')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/duckybhai
|
47a05932f28c79ee9ecbc4de7fb36b24681be3f7
|
2022-06-14T11:44:57.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/duckybhai
| 0 | null |
transformers
| 38,135 |
---
language: en
thumbnail: http://www.huggingtweets.com/duckybhai/1655207092084/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1494814887410909195/1_cZ1OGN_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Saad Ur Rehman</div>
<div style="text-align: center; font-size: 14px;">@duckybhai</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Saad Ur Rehman.
| Data | Saad Ur Rehman |
| --- | --- |
| Tweets downloaded | 2045 |
| Retweets | 158 |
| Short tweets | 233 |
| Tweets kept | 1654 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/e0w83ypv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @duckybhai's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2tc4ee4o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2tc4ee4o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/duckybhai')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/imrankhanpti
|
8ba96dcb73b6110e22f62440d3a7d89b430efb07
|
2022-06-14T12:28:35.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/imrankhanpti
| 0 | null |
transformers
| 38,136 |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1526278959746392069/t3sMBz94_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Imran Khan</div>
<div style="text-align: center; font-size: 14px;">@imrankhanpti</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Imran Khan.
| Data | Imran Khan |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 28 |
| Short tweets | 8 |
| Tweets kept | 3214 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2s8u3tpn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imrankhanpti's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/g9j8i8kg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/g9j8i8kg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/imrankhanpti')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mgtoxd/wav2vec2test
|
3331578689a38133a54cc8071249bc25e5979e0f
|
2022-06-14T14:48:44.000Z
|
[
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] |
automatic-speech-recognition
| false |
mgtoxd
| null |
mgtoxd/wav2vec2test
| 0 | null |
transformers
| 38,137 |
Entry not found
|
huggingtweets/lukaesch
|
56b24ee7b0e767631d18882d7f8ad42e835ce688
|
2022-06-14T16:33:23.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/lukaesch
| 0 | null |
transformers
| 38,138 |
---
language: en
thumbnail: http://www.huggingtweets.com/lukaesch/1655224388749/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/635525362471038977/hSfNBIhy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lukas (SoTrusty.com) π</div>
<div style="text-align: center; font-size: 14px;">@lukaesch</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lukas (SoTrusty.com) π.
| Data | Lukas (SoTrusty.com) π |
| --- | --- |
| Tweets downloaded | 1051 |
| Retweets | 326 |
| Short tweets | 60 |
| Tweets kept | 665 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/v5uo1xq4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lukaesch's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31s1ya5a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31s1ya5a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/lukaesch')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ndaheim/cima_joint_model
|
ad6c1a6e5819c196fd90f65004936456e248c5a0
|
2022-06-14T17:13:04.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
ndaheim
| null |
ndaheim/cima_joint_model
| 0 | null |
transformers
| 38,139 |
Entry not found
|
mcalcagno/mcd101-finedtuned-roberta-xnli
|
349dd648e1717aada151734ee8c90a9c6b6881ac
|
2022-06-14T17:41:58.000Z
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
| false |
mcalcagno
| null |
mcalcagno/mcd101-finedtuned-roberta-xnli
| 0 | null |
transformers
| 38,140 |
Entry not found
|
kravchenko/uk-mt5-base-gec-tokenized
|
381c5aa038dd1d5903dba8ec77f0f0632d29020a
|
2022-06-14T20:31:16.000Z
|
[
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
kravchenko
| null |
kravchenko/uk-mt5-base-gec-tokenized
| 0 | null |
transformers
| 38,141 |
Entry not found
|
nateraw/koala-panda-wombat
|
3d16d91577b4cc716027007254bf9bb99f384cd2
|
2022-06-14T20:31:04.000Z
|
[
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] |
image-classification
| false |
nateraw
| null |
nateraw/koala-panda-wombat
| 0 | null |
transformers
| 38,142 |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: koala-panda-wombat
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9850746393203735
---
# koala-panda-wombat
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
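For programmatic use outside the demo, the checkpoint should load with the standard `transformers` image-classification classes (recent releases expose `AutoImageProcessor`); a minimal sketch in which `animal.jpg` is a placeholder path:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("nateraw/koala-panda-wombat")
model = AutoModelForImageClassification.from_pretrained("nateraw/koala-panda-wombat")

image = Image.open("animal.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```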
## Example Images
#### koala

#### panda

#### wombat

|
geronimo/RobertaBNE23
|
92fde9cf4a14aecdeed8ed1f606703a5af5d974c
|
2022-06-14T20:53:46.000Z
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
| false |
geronimo
| null |
geronimo/RobertaBNE23
| 0 | null |
transformers
| 38,143 |
Entry not found
|
huggingtweets/rangersfc
|
f6706468764c53493cc056347049aa39a7aaee7f
|
2022-06-14T20:58:46.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/rangersfc
| 0 | null |
transformers
| 38,144 |
---
language: en
thumbnail: http://www.huggingtweets.com/rangersfc/1655240322192/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1513529336107839491/OQuphidQ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rangers Football Club</div>
<div style="text-align: center; font-size: 14px;">@rangersfc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rangers Football Club.
| Data | Rangers Football Club |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 315 |
| Short tweets | 338 |
| Tweets kept | 2597 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3150wqc2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rangersfc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3bzvo1hp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3bzvo1hp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rangersfc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mcalcagno/mcd101-finedtuned-recognaibert-xnli
|
8ea8d7667ff2ccbf37498e072aaa09fde25376ce
|
2022-06-14T22:14:24.000Z
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
| false |
mcalcagno
| null |
mcalcagno/mcd101-finedtuned-recognaibert-xnli
| 0 | null |
transformers
| 38,145 |
Entry not found
|
lmqg/t5-base-squadshifts-new_wiki
|
13c68e3c881f0676a961795cbc5a66ca7a282407
|
2022-06-15T00:00:52.000Z
|
[
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
lmqg
| null |
lmqg/t5-base-squadshifts-new_wiki
| 0 | null |
transformers
| 38,146 |
Entry not found
|
lmqg/t5-base-squadshifts-nyt
|
d79e5f0cf6d7039b2fc7864696a61f63d06e3ceb
|
2022-06-15T00:03:05.000Z
|
[
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
lmqg
| null |
lmqg/t5-base-squadshifts-nyt
| 0 | null |
transformers
| 38,147 |
Entry not found
|
steven123/Teeth_B
|
d5e5b89568da3988b303885b94dbbb60534c3177
|
2022-06-15T00:31:50.000Z
|
[
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] |
image-classification
| false |
steven123
| null |
steven123/Teeth_B
| 0 | null |
transformers
| 38,148 |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Teeth_B
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6800000071525574
---
# Teeth_B
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Good Teeth

#### Missing Teeth

#### Rotten Teeth

|
phunc/t5-small-finetuned-xsum
|
c84b0833663721b8db58392fee40d5557239329e
|
2022-06-15T07:18:50.000Z
|
[
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
phunc
| null |
phunc/t5-small-finetuned-xsum
| 0 | null |
transformers
| 38,149 |
Entry not found
|
mgtoxd/tsttst
|
e320939176933d28c32d6af787ea32f21fbd7307
|
2022-06-15T13:19:58.000Z
|
[
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] |
automatic-speech-recognition
| false |
mgtoxd
| null |
mgtoxd/tsttst
| 0 | null |
transformers
| 38,150 |
Entry not found
|
shurafa16/opus-mt-ar-en-finetuned-ar-to-en
|
f4e3d1bb1a8662f0e9d73ef14413eb7af5c71403
|
2022-06-18T14:11:53.000Z
|
[
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:news_commentary",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] |
text2text-generation
| false |
shurafa16
| null |
shurafa16/opus-mt-ar-en-finetuned-ar-to-en
| 0 | null |
transformers
| 38,151 |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- news_commentary
metrics:
- bleu
model-index:
- name: opus-mt-ar-en-finetuned-ar-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: news_commentary
type: news_commentary
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 32.8872
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the news_commentary dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6933
- Bleu: 32.8872
- Gen Len: 56.084
## Model description
More information needed
## Intended uses & limitations
More information needed
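While the card leaves this section blank, a minimal inference sketch can be given (an assumption on my part: the checkpoint keeps the standard Marian translation interface of its base model; the sample sentence is a placeholder):
```python
from transformers import pipeline

translator = pipeline("translation", model="shurafa16/opus-mt-ar-en-finetuned-ar-to-en")
print(translator("مرحبا بالعالم"))  # placeholder input, "Hello, world"
```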
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
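A hedged sketch of how the list above maps onto the `transformers` Trainer API; `output_dir` is a placeholder and the remaining fields mirror the card one-to-one:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-ar-en-finetuned-ar-to-en",  # placeholder
    learning_rate=3e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```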
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 188 | 0.7407 | 30.7259 | 56.296 |
| No log | 2.0 | 376 | 0.6927 | 32.2038 | 58.602 |
| 0.8066 | 3.0 | 564 | 0.6898 | 33.1091 | 57.72 |
| 0.8066 | 4.0 | 752 | 0.6925 | 33.0842 | 56.574 |
| 0.8066 | 5.0 | 940 | 0.6933 | 32.8872 | 56.084 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
winson/distilbert-base-uncased-finetuned-imdb-accelerate
|
e671d13fce19a8e976e636af2d76915e8e638bcb
|
2022-06-25T13:10:32.000Z
|
[
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
| false |
winson
| null |
winson/distilbert-base-uncased-finetuned-imdb-accelerate
| 0 | null |
transformers
| 38,152 |
Entry not found
|
flyswot/test2
|
1a1a1386da9dea81029e16071651cda3abeabb1c
|
2022-06-15T15:49:17.000Z
|
[
"pytorch",
"convnext",
"image-classification",
"transformers",
"generated_from_trainer",
"model-index"
] |
image-classification
| false |
flyswot
| null |
flyswot/test2
| 0 | null |
transformers
| 38,153 |
---
tags:
- generated_from_trainer
model-index:
- name: test2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test2
This model is a fine-tuned version of [flyswot/convnext-tiny-224_flyswot](https://huggingface.co/flyswot/convnext-tiny-224_flyswot) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
- mixed_precision_training: Native AMP
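The same list expressed as `TrainingArguments` (a sketch; `output_dir` is a placeholder, and `fp16=True` is the Trainer flag behind "Native AMP"):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test2",   # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=0.1,
    fp16=True,            # mixed_precision_training: Native AMP
)
```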
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.1 | 23 | 0.1128 | 0.9787 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
xin811/dummy-t5-small-finetuned-en-zh
|
8f1a83be43f853afc97dee8f3ca39c5b7ac59077
|
2022-06-15T13:47:51.000Z
|
[
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
xin811
| null |
xin811/dummy-t5-small-finetuned-en-zh
| 0 | null |
transformers
| 38,154 |
Entry not found
|
ouiame/T5_mlsum
|
795d06f64dd84bb086ffee897152ab58573a2754
|
2022-06-16T05:31:30.000Z
|
[
"pytorch",
"mt5",
"text2text-generation",
"fr",
"dataset:ouiame/autotrain-data-trainproject",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] |
text2text-generation
| false |
ouiame
| null |
ouiame/T5_mlsum
| 0 | null |
transformers
| 38,155 |
---
tags: autotrain
language: fr
widget:
- text: "I love AutoTrain π€"
datasets:
- ouiame/autotrain-data-trainproject
co2_eq_emissions: 976.8219757938544
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 985232789
- CO2 Emissions (in grams): 976.8219757938544
## Validation Metrics
- Loss: 1.7047555446624756
- Rouge1: 20.2108
- Rouge2: 7.8633
- RougeL: 16.9554
- RougeLsum: 17.3178
- Gen Len: 18.9874
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ouiame/autotrain-trainproject-985232789
```
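Equivalently, from Python (a sketch using `requests` with the same endpoint, payload, and placeholder API key as the cURL call above):
```python
import requests

API_URL = "https://api-inference.huggingface.co/ouiame/autotrain-trainproject-985232789"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}  # placeholder key

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```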
|
income/jpq-gpl-dbpedia-entity-document_encoder-base-msmarco-distilbert-tas-b
|
71ba1b044acf7b842dfbe2cfd91b971a11a28edd
|
2022-06-15T17:08:10.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-gpl-dbpedia-entity-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,156 |
---
license: apache-2.0
---
|
income/jpq-gpl-fever-question_encoder-base-msmarco-distilbert-tas-b
|
e07b38ac708d8146775fa5be19b8d14ad72f85c0
|
2022-06-15T17:08:46.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-gpl-fever-question_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,157 |
---
license: apache-2.0
---
|
income/jpq-gpl-fever-document_encoder-base-msmarco-distilbert-tas-b
|
122c12b86bcec5e0ae27977a6e40e0eea55d25b1
|
2022-06-15T17:10:19.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-gpl-fever-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,158 |
---
license: apache-2.0
---
|
income/jpq-gpl-hotpotqa-document_encoder-base-msmarco-distilbert-tas-b
|
a9af1edeea3c5e8ee61edc9d9a0d3754e58073a6
|
2022-06-15T17:18:18.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-gpl-hotpotqa-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,159 |
---
license: apache-2.0
---
|
income/jpq-gpl-quora-document_encoder-base-msmarco-distilbert-tas-b
|
9764717bcaf6998087032cb7fd6eab8438eca968
|
2022-06-15T17:32:42.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-gpl-quora-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,160 |
---
license: apache-2.0
---
|
income/jpq-gpl-robust04-question_encoder-base-msmarco-distilbert-tas-b
|
2bc8b96125b944fb76c9a0c03ef632320549ed27
|
2022-06-15T17:33:15.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-gpl-robust04-question_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,161 |
---
license: apache-2.0
---
|
huggingtweets/_mohamads
|
656fc6298ab310b3d7aacac55af0f5c5b31da3f9
|
2022-06-15T17:37:47.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/_mohamads
| 0 | null |
transformers
| 38,162 |
---
language: en
thumbnail: http://www.huggingtweets.com/_mohamads/1655314541919/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1522920330960027648/Z5piAxnG_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">𧬠Ω
ΨΩ
Ψ― Ψ§ΩΨ²ΩΨ±Ψ§ΩΩ</div>
<div style="text-align: center; font-size: 14px;">@_mohamads</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🧬 محمد الزهراني.
| Data | 🧬 محمد الزهراني |
| --- | --- |
| Tweets downloaded | 1108 |
| Retweets | 75 |
| Short tweets | 90 |
| Tweets kept | 943 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/y8wg10zm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_mohamads's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jm1spua) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jm1spua/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_mohamads')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
income/jpq-gpl-robust04-document_encoder-base-msmarco-distilbert-tas-b
|
37ae74036461e78d70a848e1cfeb8f1fe8101986
|
2022-06-15T17:34:22.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-gpl-robust04-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,163 |
---
license: apache-2.0
---
|
income/jpq-gpl-scifact-question_encoder-base-msmarco-distilbert-tas-b
|
46eb424a137ad025c67fd70f1db63a3660a1db32
|
2022-06-15T17:37:16.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-gpl-scifact-question_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,164 |
---
license: apache-2.0
---
|
income/jpq-gpl-signal1m-question_encoder-base-msmarco-distilbert-tas-b
|
2a1b94cf078a357eb92a632fcc63e0e292fed97e
|
2022-06-15T17:39:14.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-gpl-signal1m-question_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,165 |
---
license: apache-2.0
---
|
income/jpq-gpl-signal1m-document_encoder-base-msmarco-distilbert-tas-b
|
60c4c3e53c8a30907b499fa224be31e3f5de5d0d
|
2022-06-15T17:42:28.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-gpl-signal1m-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,166 |
---
license: apache-2.0
---
|
income/jpq-gpl-trec-covid-document_encoder-base-msmarco-distilbert-tas-b
|
c58b63cbb75783de7594fc08909227961a13836f
|
2022-06-15T17:43:34.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-gpl-trec-covid-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,167 |
---
license: apache-2.0
---
|
lmqg/t5-large-squadshifts-new_wiki
|
191bb8eb51e2138b616da80b695c066888351ca3
|
2022-06-16T03:47:40.000Z
|
[
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
lmqg
| null |
lmqg/t5-large-squadshifts-new_wiki
| 0 | null |
transformers
| 38,168 |
Entry not found
|
kcarnold/inquisitive2
|
d90e688bfa08040454d6c785b43f75c83a02f7e1
|
2022-06-15T19:55:47.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] |
text2text-generation
| false |
kcarnold
| null |
kcarnold/inquisitive2
| 0 | null |
transformers
| 38,169 |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: inquisitive2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# inquisitive2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1760
## Model description
More information needed
## Intended uses & limitations
More information needed
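The card leaves this blank; as a stopgap, a minimal inference sketch (assuming the checkpoint loads as a standard BART text2text model; the input string is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="kcarnold/inquisitive2")
print(generator("Your input text here."))  # placeholder input
```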
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0
- Datasets 2.3.0
- Tokenizers 0.12.1
|
liux3790/autotrain-journals-covid-990032813
|
38860543b33056598fc3ee233170c8525c00aa2c
|
2022-06-15T19:09:50.000Z
|
[
"pytorch",
"bert",
"text-classification",
"en",
"dataset:liux3790/autotrain-data-journals-covid",
"transformers",
"autotrain",
"co2_eq_emissions"
] |
text-classification
| false |
liux3790
| null |
liux3790/autotrain-journals-covid-990032813
| 0 | null |
transformers
| 38,170 | |
huggingtweets/yemeen
|
6961464c661d0d1f22c778196c8841934e78f4fe
|
2022-06-15T21:27:04.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/yemeen
| 0 | null |
transformers
| 38,171 |
---
language: en
thumbnail: http://www.huggingtweets.com/yemeen/1655328324400/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1438226079030947845/pwH4SUlU_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ππππππ</div>
<div style="text-align: center; font-size: 14px;">@yemeen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 𝕐𝕖𝕞𝕖𝕖𝕟.
| Data | 𝕐𝕖𝕞𝕖𝕖𝕟 |
| --- | --- |
| Tweets downloaded | 2911 |
| Retweets | 1038 |
| Short tweets | 198 |
| Tweets kept | 1675 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3it77r2s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yemeen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/39fvs51l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/39fvs51l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yemeen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
income/jpq-gpl-trec-news-document_encoder-base-msmarco-distilbert-tas-b
|
cf589b23a8e9e94e3d3e1b67d0db05ef0e307a7e
|
2022-06-15T21:53:59.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-gpl-trec-news-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,172 |
---
license: apache-2.0
---
|
income/jpq-gpl-webis-touche2020-question_encoder-base-msmarco-distilbert-tas-b
|
ddbc62a6fa654a98930b40ef94842c93621ca826
|
2022-06-15T21:54:42.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-gpl-webis-touche2020-question_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,173 |
---
license: apache-2.0
---
|
income/jpq-genq-arguana-document_encoder-base-msmarco-distilbert-tas-b
|
858475f3c91184a9c8fa12c301a45a840673350d
|
2022-06-15T21:58:21.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-genq-arguana-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,174 |
---
license: apache-2.0
---
|
income/jpq-genq-trec-news-question_encoder-base-msmarco-distilbert-tas-b
|
c0fe5f6212f26c6018d42a01c11197db31defce7
|
2022-06-15T21:58:49.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-genq-trec-news-question_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,175 |
---
license: apache-2.0
---
|
income/jpq-genq-trec-news-document_encoder-base-msmarco-distilbert-tas-b
|
dec697141e0122247c241a4eda5e6627c83f678d
|
2022-06-15T21:59:20.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-genq-trec-news-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,176 |
---
license: apache-2.0
---
|
income/jpq-genq-fever-document_encoder-base-msmarco-distilbert-tas-b
|
3e1341cd91621512b4b4c8fc3d972316db31ff7c
|
2022-06-15T22:05:11.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-genq-fever-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,177 |
---
license: apache-2.0
---
|
income/jpq-genq-fiqa-question_encoder-base-msmarco-distilbert-tas-b
|
c05f1995f342847e4125f9e9a9974002902ae1b5
|
2022-06-15T22:06:17.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-genq-fiqa-question_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,178 |
---
license: apache-2.0
---
|
income/jpq-genq-nq-document_encoder-base-msmarco-distilbert-tas-b
|
ecf3a134aa1a6bf4e764b95d2b5eb520cb95072b
|
2022-06-15T22:33:31.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-genq-nq-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,179 |
---
license: apache-2.0
---
|
income/jpq-genq-robust04-question_encoder-base-msmarco-distilbert-tas-b
|
f3eb85c1793891857737857bac267bd99d5f54ea
|
2022-06-15T22:47:31.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-genq-robust04-question_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,180 |
---
license: apache-2.0
---
|
income/jpq-genq-robust04-document_encoder-base-msmarco-distilbert-tas-b
|
e4fb83af6aa9d64099ce7258e71721c98d9f1996
|
2022-06-15T22:48:04.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-genq-robust04-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,181 |
---
license: apache-2.0
---
|
income/jpq-genq-scifact-question_encoder-base-msmarco-distilbert-tas-b
|
3b94da5682b523e5928d982e7d8667e49f4b0cd2
|
2022-06-15T22:49:37.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-genq-scifact-question_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,182 |
---
license: apache-2.0
---
|
huggingtweets/hotdogsladies
|
c82eaea6774869d6b5e611a36fa717254b8504ea
|
2022-06-15T23:01:56.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/hotdogsladies
| 0 | null |
transformers
| 38,183 |
---
language: en
thumbnail: http://www.huggingtweets.com/hotdogsladies/1655334112277/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1474526156430798849/0Z_zfYqH_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Merlin Mann</div>
<div style="text-align: center; font-size: 14px;">@hotdogsladies</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Merlin Mann.
| Data | Merlin Mann |
| --- | --- |
| Tweets downloaded | 314 |
| Retweets | 41 |
| Short tweets | 48 |
| Tweets kept | 225 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/epnyc8a1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hotdogsladies's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3bjnvmjn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3bjnvmjn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hotdogsladies')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/skysports
|
253cd8e09402a83c171443967cdeb17a363a1bbd
|
2022-06-15T23:05:03.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/skysports
| 0 | null |
transformers
| 38,184 |
---
language: en
thumbnail: http://www.huggingtweets.com/skysports/1655334298376/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1483397012657688577/19JEENoX_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sky Sports</div>
<div style="text-align: center; font-size: 14px;">@skysports</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Sky Sports.
| Data | Sky Sports |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 720 |
| Short tweets | 21 |
| Tweets kept | 2509 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3m4jcaji/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @skysports's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/4psw7x27) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/4psw7x27/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/skysports')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kravchenko/uk-mt5-large-gec-tokenized
|
ea3448561272e1fcaba7590d084c3f6c7b2760dd
|
2022-06-15T23:32:14.000Z
|
[
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
kravchenko
| null |
kravchenko/uk-mt5-large-gec-tokenized
| 0 | null |
transformers
| 38,185 |
Entry not found
|
huggingtweets/pronewchaos
|
af96dbb4e11e39a7207e173773f60c1f5395ff1a
|
2022-06-16T04:13:17.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/pronewchaos
| 0 | null |
transformers
| 38,186 |
---
language: en
thumbnail: http://www.huggingtweets.com/pronewchaos/1655352793305/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1519208550865653760/gxiNIWdv_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Saitoshi Nanomoto πβοΈπ₯</div>
<div style="text-align: center; font-size: 14px;">@pronewchaos</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Saitoshi Nanomoto πβοΈπ₯.
| Data | Saitoshi Nanomoto πβοΈπ₯ |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 18 |
| Short tweets | 617 |
| Tweets kept | 2615 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3b2f6bkt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pronewchaos's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1lho9s4n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1lho9s4n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pronewchaos')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/acai28
|
5bf8bf83d7bc931c8c5fa616841d2c72fb0da05d
|
2022-06-16T03:39:49.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/acai28
| 0 | null |
transformers
| 38,187 |
---
language: en
thumbnail: http://www.huggingtweets.com/acai28/1655350773093/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1527251112604184576/3dKVjGwK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">alec</div>
<div style="text-align: center; font-size: 14px;">@acai28</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from alec.
| Data | alec |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 165 |
| Short tweets | 488 |
| Tweets kept | 2592 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/rd31m5h3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @acai28's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/w8y3ix5h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/w8y3ix5h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/acai28')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jhliu/ClinicalNoteBERT-base-uncased-NTD-MIMIC-segment
|
4ca0f1b96c50615052cc0279ad29f3d7113a5828
|
2022-06-16T04:22:15.000Z
|
[
"pytorch",
"bert",
"transformers"
] | null | false |
jhliu
| null |
jhliu/ClinicalNoteBERT-base-uncased-NTD-MIMIC-segment
| 0 | null |
transformers
| 38,188 |
Entry not found
|
Rakesh111/hindi_model
|
8ef93aec6d2f3077a04317f7bc2b5853f91371b1
|
2022-06-16T07:05:02.000Z
|
[
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] |
automatic-speech-recognition
| false |
Rakesh111
| null |
Rakesh111/hindi_model
| 0 | null |
transformers
| 38,189 | |
sayanmandal/t5-small_6_3-hi_en-en_mix
|
2d0a952450dbe8674f65f88501c324ed5dc254ed
|
2022-06-16T14:54:24.000Z
|
[
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
sayanmandal
| null |
sayanmandal/t5-small_6_3-hi_en-en_mix
| 0 | null |
transformers
| 38,190 |
Entry not found
|
huggingtweets/minusgn
|
053884b36a36f6cbc3074522f8a3cd110a93ba1a
|
2022-06-16T09:01:01.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/minusgn
| 0 | null |
transformers
| 38,191 |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1081285419512127488/Mkb9FgN3_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Isak Vik</div>
<div style="text-align: center; font-size: 14px;">@minusgn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Isak Vik.
| Data | Isak Vik |
| --- | --- |
| Tweets downloaded | 3222 |
| Retweets | 190 |
| Short tweets | 550 |
| Tweets kept | 2482 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1dy32g00/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @minusgn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3njlvz02) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3njlvz02/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/minusgn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ndaheim/cima_ungrounded_joint_model
|
8ad164dd0f6f7712080f614c36469cff0f476895
|
2022-06-16T09:29:01.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
ndaheim
| null |
ndaheim/cima_ungrounded_joint_model
| 0 | null |
transformers
| 38,192 |
Entry not found
|
anuragiiser/convnext-tiny-finetuned-mri
|
d5dcdeca0dffcce0ba8e063ffcad59affeb32598
|
2022-06-28T09:53:31.000Z
|
[
"pytorch",
"convnext",
"image-classification",
"transformers"
] |
image-classification
| false |
anuragiiser
| null |
anuragiiser/convnext-tiny-finetuned-mri
| 0 | null |
transformers
| 38,193 | |
philmunz/poc_dl
|
cc4b3d77874b7deee3980227bec6ec977d699018
|
2022-06-16T14:32:21.000Z
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
| false |
philmunz
| null |
philmunz/poc_dl
| 0 | null |
transformers
| 38,194 |
Entry not found
|
huggingtweets/basilhalperin-ben_golub-tylercowen
|
9bb4e000b2208d21132bba7379a15f14d32051a3
|
2022-06-16T17:09:13.000Z
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] |
text-generation
| false |
huggingtweets
| null |
huggingtweets/basilhalperin-ben_golub-tylercowen
| 0 | null |
transformers
| 38,195 |
---
language: en
thumbnail: http://www.huggingtweets.com/basilhalperin-ben_golub-tylercowen/1655399323629/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1483290763056320512/oILN7yPo_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1043847779355897857/xyZk8v-m_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1284936824075550723/ix2eGZd7_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">tylercowen & Basil Halperin & Ben Golub πΊπ¦</div>
<div style="text-align: center; font-size: 14px;">@basilhalperin-ben_golub-tylercowen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from tylercowen & Basil Halperin & Ben Golub 🇺🇦.
| Data | tylercowen | Basil Halperin | Ben Golub 🇺🇦 |
| --- | --- | --- | --- |
| Tweets downloaded | 2642 | 1024 | 3247 |
| Retweets | 2065 | 80 | 1009 |
| Short tweets | 43 | 60 | 390 |
| Tweets kept | 534 | 884 | 1848 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4x0ck2xi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @basilhalperin-ben_golub-tylercowen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fuzqv36t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fuzqv36t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/basilhalperin-ben_golub-tylercowen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
income/jpq-genq-signal1m-question_encoder-base-msmarco-distilbert-tas-b
|
1d75ae8ae6d83decea0d7407e566a201b78e37fc
|
2022-06-16T17:45:36.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-genq-signal1m-question_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,196 |
---
license: apache-2.0
---
|
income/jpq-genq-signal1m-document_encoder-base-msmarco-distilbert-tas-b
|
2c0a9fa150723063b5990ca7fba0ffb4b20950ad
|
2022-06-16T17:46:02.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-genq-signal1m-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,197 |
---
license: apache-2.0
---
|
income/jpq-genq-trec-covid-question_encoder-base-msmarco-distilbert-tas-b
|
5c2de9edaf2e5a8aedb9c5025892dd964e2a806d
|
2022-06-16T17:46:35.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-genq-trec-covid-question_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,198 |
---
license: apache-2.0
---
|
income/jpq-genq-trec-covid-document_encoder-base-msmarco-distilbert-tas-b
|
1a4d6d3c793db42d2bc71627d39685d4fd25ec5a
|
2022-06-16T17:47:01.000Z
|
[
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false |
income
| null |
income/jpq-genq-trec-covid-document_encoder-base-msmarco-distilbert-tas-b
| 0 | null |
transformers
| 38,199 |
---
license: apache-2.0
---
|