modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-01 12:29:10) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 547 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-01 12:28:04) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
Elitay/Reptilian
|
Elitay
| 2022-11-20T15:12:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-20T02:22:58Z |
---
license: creativeml-openrail-m
---
Trained with Dreambooth on "kobold", "lizardfolk", and "dragonborn", with checkpoints at 6000, 10000, and 14000 steps. I recommend the 14000-step model with a CFG of 4-8. If you have trouble getting certain elements into the image (e.g. hats), try the checkpoints trained for fewer steps.

You can also use a higher CFG when generating inked images, e.g. CFG 9 with "photo octane 3d render" in the negative prompt.
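A minimal usage sketch (not part of the original card), assuming the chosen checkpoint has been converted to 🤗 Diffusers format; the prompt and file name below are illustrative:

```python
# Hypothetical sketch: load the 14000-step checkpoint with Diffusers
# (assumes the .ckpt has been converted to Diffusers format).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Elitay/Reptilian",              # assumption: Diffusers weights are available here
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "portrait of a dragonborn wearing a hat",
    guidance_scale=6.0,              # CFG 4-8 as recommended above
    negative_prompt="photo octane 3d render",
).images[0]
image.save("dragonborn.png")
```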

|
dpkmnit/bert-finetuned-squad
|
dpkmnit
| 2022-11-20T14:58:13Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-18T06:19:21Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: dpkmnit/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dpkmnit/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7048
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 66549, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2092 | 0 |
| 0.7048 | 1 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.1
- Datasets 2.7.0
- Tokenizers 0.13.2
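A hedged usage sketch (not part of the original card) for running the TensorFlow checkpoint through the question-answering pipeline; the question and context are illustrative:

```python
from transformers import pipeline

# framework="tf" selects the TensorFlow weights published in this repo
qa = pipeline("question-answering", model="dpkmnit/bert-finetuned-squad", framework="tf")
result = qa(
    question="Who wrote the report?",
    context="The report was written by Jane Doe in 2020.",
)
print(result["answer"], result["score"])
```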
|
blkpst/ddpm-butterflies-128
|
blkpst
| 2022-11-20T14:36:54Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-20T13:20:58Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
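A minimal sketch of what that snippet could look like, assuming the standard `DDPMPipeline` API and that the weights in this repository load with it:

```python
from diffusers import DDPMPipeline

# Load the unconditional DDPM and sample one 128x128 butterfly image
pipeline = DDPMPipeline.from_pretrained("blkpst/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```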
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/blackpansuto/ddpm-butterflies-128/tensorboard?#scalars)
|
Bauyrjan/wav2vec2-kazakh
|
Bauyrjan
| 2022-11-20T14:31:30Z | 192 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-11T05:35:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-kazakh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-kazakh
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.0+cu117
- Datasets 1.13.3
- Tokenizers 0.10.3
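A hedged usage sketch (not part of the original card): transcribing a local audio file with the ASR pipeline; the file path is a placeholder:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Bauyrjan/wav2vec2-kazakh")
print(asr("sample_kazakh_audio.wav")["text"])  # path is illustrative
```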
|
akreal/mbart-large-50-finetuned-media
|
akreal
| 2022-11-20T13:32:58Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"mbart-50",
"fr",
"dataset:MEDIA",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-20T13:14:49Z |
---
language:
- fr
tags:
- mbart-50
license: apache-2.0
datasets:
- MEDIA
metrics:
- cer
- cver
---
This model is `mbart-large-50-many-to-many-mmt` model fine-tuned on the text part of [MEDIA](https://catalogue.elra.info/en-us/repository/browse/ELRA-S0272/) spoken language understanding dataset.
The scores on the test set are 16.50% and 19.09% for CER and CVER respectively.
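A minimal loading sketch (not part of the original card), assuming the standard MBart-50 classes; the French input sentence is illustrative, and the fine-tuned model is expected to emit MEDIA-style semantic annotations rather than translations:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("akreal/mbart-large-50-finetuned-media")
tokenizer = MBart50TokenizerFast.from_pretrained("akreal/mbart-large-50-finetuned-media")
tokenizer.src_lang = "fr_XX"  # MEDIA is a French SLU corpus

inputs = tokenizer("je voudrais réserver une chambre pour deux personnes", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```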
|
Western1234/Modelop
|
Western1234
| 2022-11-20T12:55:18Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2022-11-20T12:53:42Z |
---
license: openrail
---
git lfs install
git clone https://huggingface.co/Western1234/Modelop
|
hungngocphat01/Checkpoint_zaloAI_11_19_2022
|
hungngocphat01
| 2022-11-20T11:59:05Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-20T11:53:29Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: Checkpoint_zaloAI_11_19_2022
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Checkpoint_zaloAI_11_19_2022
This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3926
- eval_wer: 0.6743
- eval_runtime: 23.1283
- eval_samples_per_second: 39.865
- eval_steps_per_second: 5.016
- epoch: 25.07
- step: 26000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
youa/CreatTitle
|
youa
| 2022-11-20T11:54:27Z | 1 | 0 | null |
[
"pytorch",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-11-07T13:56:12Z |
---
license: bigscience-bloom-rail-1.0
---
|
zhiguoxu/bert-base-chinese-finetuned-ner-split_food
|
zhiguoxu
| 2022-11-20T09:32:56Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-20T08:25:39Z |
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-chinese-finetuned-ner-split_food
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-ner-split_food
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0077
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6798 | 1.0 | 1 | 1.6743 | 0.0 |
| 1.8172 | 2.0 | 2 | 0.6580 | 0.0 |
| 0.746 | 3.0 | 3 | 0.4864 | 0.0 |
| 0.4899 | 4.0 | 4 | 0.3927 | 0.0 |
| 0.401 | 5.0 | 5 | 0.2753 | 0.0 |
| 0.2963 | 6.0 | 6 | 0.2160 | 0.0 |
| 0.2452 | 7.0 | 7 | 0.1848 | 0.5455 |
| 0.2188 | 8.0 | 8 | 0.1471 | 0.7692 |
| 0.1775 | 9.0 | 9 | 0.1131 | 0.7692 |
| 0.1469 | 10.0 | 10 | 0.0864 | 0.8293 |
| 0.1145 | 11.0 | 11 | 0.0621 | 0.9333 |
| 0.0881 | 12.0 | 12 | 0.0432 | 1.0 |
| 0.0702 | 13.0 | 13 | 0.0329 | 1.0 |
| 0.0531 | 14.0 | 14 | 0.0268 | 1.0 |
| 0.044 | 15.0 | 15 | 0.0184 | 1.0 |
| 0.0321 | 16.0 | 16 | 0.0129 | 1.0 |
| 0.0255 | 17.0 | 17 | 0.0101 | 1.0 |
| 0.0236 | 18.0 | 18 | 0.0087 | 1.0 |
| 0.0254 | 19.0 | 19 | 0.0080 | 1.0 |
| 0.0185 | 20.0 | 20 | 0.0077 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0+cu102
- Datasets 1.18.4
- Tokenizers 0.12.1
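A hedged usage sketch (not part of the original card): running the fine-tuned NER model through the token-classification pipeline; the example sentence is illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="zhiguoxu/bert-base-chinese-finetuned-ner-split_food",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("我想吃北京烤鸭"))
```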
|
OpenMatch/cocodr-base-msmarco-warmup
|
OpenMatch
| 2022-11-20T08:26:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-20T08:20:01Z |
---
license: mit
---
This model was pretrained on the BEIR corpus and then fine-tuned on MS MARCO with BM25 warmup only, following the approach described in the paper **COCO-DR: Combating Distribution Shifts in Zero-Shot Dense Retrieval with Contrastive and Distributionally Robust Learning**. The associated GitHub repository is available at https://github.com/OpenMatch/COCO-DR.
The model uses BERT-base as its backbone, with 110M parameters.
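A minimal retrieval-style sketch (not part of the original card), assuming CLS-pooled embeddings are used to score query-passage pairs; the pooling choice and example texts are assumptions:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OpenMatch/cocodr-base-msmarco-warmup")
model = AutoModel.from_pretrained("OpenMatch/cocodr-base-msmarco-warmup")

query = "what is dense retrieval"
passage = "Dense retrieval encodes queries and documents into vectors and matches them by similarity."

with torch.no_grad():
    q_emb = model(**tokenizer(query, return_tensors="pt")).last_hidden_state[:, 0]   # CLS vector
    p_emb = model(**tokenizer(passage, return_tensors="pt")).last_hidden_state[:, 0]

print(torch.nn.functional.cosine_similarity(q_emb, p_emb).item())
```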
|
zhiguoxu/bert-base-chinese-finetuned-ner-food
|
zhiguoxu
| 2022-11-20T08:20:01Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-19T05:47:41Z |
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-chinese-finetuned-ner-food
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-ner-food
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0039
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0829 | 1.0 | 3 | 1.6749 | 0.0 |
| 1.5535 | 2.0 | 6 | 1.0327 | 0.6354 |
| 1.0573 | 3.0 | 9 | 0.6295 | 0.7097 |
| 0.5854 | 4.0 | 12 | 0.3763 | 0.8271 |
| 0.4292 | 5.0 | 15 | 0.2165 | 0.9059 |
| 0.2235 | 6.0 | 18 | 0.1121 | 0.9836 |
| 0.1535 | 7.0 | 21 | 0.0597 | 0.9975 |
| 0.0846 | 8.0 | 24 | 0.0337 | 0.9975 |
| 0.0613 | 9.0 | 27 | 0.0214 | 1.0 |
| 0.0365 | 10.0 | 30 | 0.0144 | 1.0 |
| 0.0302 | 11.0 | 33 | 0.0103 | 1.0 |
| 0.0182 | 12.0 | 36 | 0.0078 | 1.0 |
| 0.0175 | 13.0 | 39 | 0.0064 | 1.0 |
| 0.0115 | 14.0 | 42 | 0.0055 | 1.0 |
| 0.0124 | 15.0 | 45 | 0.0049 | 1.0 |
| 0.0117 | 16.0 | 48 | 0.0045 | 1.0 |
| 0.0111 | 17.0 | 51 | 0.0042 | 1.0 |
| 0.0102 | 18.0 | 54 | 0.0041 | 1.0 |
| 0.0096 | 19.0 | 57 | 0.0040 | 1.0 |
| 0.0095 | 20.0 | 60 | 0.0039 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0+cu102
- Datasets 1.18.4
- Tokenizers 0.12.1
|
huggingtweets/iwriteok
|
huggingtweets
| 2022-11-20T06:14:50Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/iwriteok/1668924855688/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/598663964340301824/im3Wzn-o_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Robert Evans (The Only Robert Evans)</div>
<div style="text-align: center; font-size: 14px;">@iwriteok</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Robert Evans (The Only Robert Evans).
| Data | Robert Evans (The Only Robert Evans) |
| --- | --- |
| Tweets downloaded | 3218 |
| Retweets | 1269 |
| Short tweets | 142 |
| Tweets kept | 1807 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3hjcp2ib/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iwriteok's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/wq4n95ia) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/wq4n95ia/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/iwriteok')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
amitjohn007/mpnet-finetuned
|
amitjohn007
| 2022-11-20T05:51:01Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"mpnet",
"question-answering",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-20T04:59:44Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: amitjohn007/mpnet-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amitjohn007/mpnet-finetuned
This model is a fine-tuned version of [shaina/covid_qa_mpnet](https://huggingface.co/shaina/covid_qa_mpnet) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5882
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.0499 | 0 |
| 0.7289 | 1 |
| 0.5882 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.0
- Tokenizers 0.13.2
|
andreaschandra/unifiedqa-v2-t5-base-1363200-finetuned-causalqa-squad
|
andreaschandra
| 2022-11-20T05:42:57Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-16T13:33:49Z |
---
tags:
- generated_from_trainer
model-index:
- name: unifiedqa-v2-t5-base-1363200-finetuned-causalqa-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unifiedqa-v2-t5-base-1363200-finetuned-causalqa-squad
This model is a fine-tuned version of [allenai/unifiedqa-v2-t5-base-1363200](https://huggingface.co/allenai/unifiedqa-v2-t5-base-1363200) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7378 | 0.05 | 73 | 1.1837 |
| 0.6984 | 0.1 | 146 | 0.8918 |
| 0.4511 | 0.15 | 219 | 0.8342 |
| 0.4696 | 0.2 | 292 | 0.7642 |
| 0.295 | 0.25 | 365 | 0.7996 |
| 0.266 | 0.3 | 438 | 0.7773 |
| 0.2372 | 0.35 | 511 | 0.8592 |
| 0.2881 | 0.39 | 584 | 0.8440 |
| 0.2578 | 0.44 | 657 | 0.8306 |
| 0.2733 | 0.49 | 730 | 0.8228 |
| 0.2073 | 0.54 | 803 | 0.8419 |
| 0.2683 | 0.59 | 876 | 0.8241 |
| 0.2693 | 0.64 | 949 | 0.8573 |
| 0.355 | 0.69 | 1022 | 0.8204 |
| 0.2246 | 0.74 | 1095 | 0.8530 |
| 0.2468 | 0.79 | 1168 | 0.8410 |
| 0.3102 | 0.84 | 1241 | 0.8035 |
| 0.2115 | 0.89 | 1314 | 0.8262 |
| 0.1855 | 0.94 | 1387 | 0.8560 |
| 0.1772 | 0.99 | 1460 | 0.8747 |
| 0.1509 | 1.04 | 1533 | 0.9132 |
| 0.1871 | 1.09 | 1606 | 0.8920 |
| 0.1624 | 1.14 | 1679 | 0.9085 |
| 0.1404 | 1.18 | 1752 | 0.9460 |
| 0.1639 | 1.23 | 1825 | 0.9812 |
| 0.0983 | 1.28 | 1898 | 0.9790 |
| 0.1395 | 1.33 | 1971 | 0.9843 |
| 0.1439 | 1.38 | 2044 | 0.9877 |
| 0.1397 | 1.43 | 2117 | 1.0338 |
| 0.1095 | 1.48 | 2190 | 1.0589 |
| 0.1228 | 1.53 | 2263 | 1.0498 |
| 0.1246 | 1.58 | 2336 | 1.0923 |
| 0.1438 | 1.63 | 2409 | 1.0995 |
| 0.1305 | 1.68 | 2482 | 1.0867 |
| 0.1077 | 1.73 | 2555 | 1.1013 |
| 0.2104 | 1.78 | 2628 | 1.0765 |
| 0.1633 | 1.83 | 2701 | 1.0796 |
| 0.1658 | 1.88 | 2774 | 1.0314 |
| 0.1358 | 1.92 | 2847 | 0.9823 |
| 0.1571 | 1.97 | 2920 | 0.9826 |
| 0.1127 | 2.02 | 2993 | 1.0324 |
| 0.0927 | 2.07 | 3066 | 1.0679 |
| 0.0549 | 2.12 | 3139 | 1.1069 |
| 0.0683 | 2.17 | 3212 | 1.1624 |
| 0.0677 | 2.22 | 3285 | 1.1174 |
| 0.0615 | 2.27 | 3358 | 1.1431 |
| 0.0881 | 2.32 | 3431 | 1.1721 |
| 0.0807 | 2.37 | 3504 | 1.1885 |
| 0.0955 | 2.42 | 3577 | 1.1991 |
| 0.0779 | 2.47 | 3650 | 1.1999 |
| 0.11 | 2.52 | 3723 | 1.1774 |
| 0.0852 | 2.57 | 3796 | 1.2095 |
| 0.0616 | 2.62 | 3869 | 1.1824 |
| 0.072 | 2.67 | 3942 | 1.2397 |
| 0.1055 | 2.71 | 4015 | 1.2181 |
| 0.0806 | 2.76 | 4088 | 1.2159 |
| 0.0684 | 2.81 | 4161 | 1.1864 |
| 0.0869 | 2.86 | 4234 | 1.1816 |
| 0.1023 | 2.91 | 4307 | 1.1717 |
| 0.0583 | 2.96 | 4380 | 1.1477 |
| 0.0684 | 3.01 | 4453 | 1.1662 |
| 0.0319 | 3.06 | 4526 | 1.2174 |
| 0.0609 | 3.11 | 4599 | 1.1947 |
| 0.0435 | 3.16 | 4672 | 1.1821 |
| 0.0417 | 3.21 | 4745 | 1.1964 |
| 0.0502 | 3.26 | 4818 | 1.2140 |
| 0.0844 | 3.31 | 4891 | 1.2028 |
| 0.0692 | 3.36 | 4964 | 1.2215 |
| 0.0366 | 3.41 | 5037 | 1.2136 |
| 0.0615 | 3.46 | 5110 | 1.2224 |
| 0.0656 | 3.5 | 5183 | 1.2468 |
| 0.0469 | 3.55 | 5256 | 1.2554 |
| 0.0475 | 3.6 | 5329 | 1.2804 |
| 0.0998 | 3.65 | 5402 | 1.2035 |
| 0.0505 | 3.7 | 5475 | 1.2095 |
| 0.0459 | 3.75 | 5548 | 1.2064 |
| 0.0256 | 3.8 | 5621 | 1.2164 |
| 0.0831 | 3.85 | 5694 | 1.2154 |
| 0.0397 | 3.9 | 5767 | 1.2126 |
| 0.0449 | 3.95 | 5840 | 1.2174 |
| 0.0322 | 4.0 | 5913 | 1.2288 |
| 0.059 | 4.05 | 5986 | 1.2274 |
| 0.0382 | 4.1 | 6059 | 1.2228 |
| 0.0202 | 4.15 | 6132 | 1.2177 |
| 0.0328 | 4.2 | 6205 | 1.2305 |
| 0.0407 | 4.24 | 6278 | 1.2342 |
| 0.0356 | 4.29 | 6351 | 1.2448 |
| 0.0414 | 4.34 | 6424 | 1.2537 |
| 0.0448 | 4.39 | 6497 | 1.2540 |
| 0.0545 | 4.44 | 6570 | 1.2552 |
| 0.0492 | 4.49 | 6643 | 1.2570 |
| 0.0293 | 4.54 | 6716 | 1.2594 |
| 0.0498 | 4.59 | 6789 | 1.2562 |
| 0.0349 | 4.64 | 6862 | 1.2567 |
| 0.0497 | 4.69 | 6935 | 1.2550 |
| 0.0194 | 4.74 | 7008 | 1.2605 |
| 0.0255 | 4.79 | 7081 | 1.2590 |
| 0.0212 | 4.84 | 7154 | 1.2571 |
| 0.0231 | 4.89 | 7227 | 1.2583 |
| 0.0399 | 4.94 | 7300 | 1.2580 |
| 0.0719 | 4.99 | 7373 | 1.2574 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
yip-i/wav2vec2-demo-F03
|
yip-i
| 2022-11-20T04:56:47Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-15T03:43:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-demo-F03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-demo-F03
This model is a fine-tuned version of [yip-i/uaspeech-pretrained](https://huggingface.co/yip-i/uaspeech-pretrained) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8742
- Wer: 1.2914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.4808 | 0.97 | 500 | 3.0628 | 1.1656 |
| 2.9947 | 1.94 | 1000 | 3.0334 | 1.1523 |
| 2.934 | 2.91 | 1500 | 3.0520 | 1.1648 |
| 2.9317 | 3.88 | 2000 | 3.3808 | 1.0 |
| 3.0008 | 4.85 | 2500 | 3.0342 | 1.2559 |
| 3.112 | 5.83 | 3000 | 3.1228 | 1.1258 |
| 2.8972 | 6.8 | 3500 | 2.9885 | 1.2914 |
| 2.8911 | 7.77 | 4000 | 3.2586 | 1.2754 |
| 2.9884 | 8.74 | 4500 | 3.0487 | 1.2090 |
| 2.873 | 9.71 | 5000 | 2.9382 | 1.2914 |
| 3.3551 | 10.68 | 5500 | 3.2607 | 1.2844 |
| 3.6426 | 11.65 | 6000 | 3.0053 | 1.0242 |
| 2.9184 | 12.62 | 6500 | 2.9219 | 1.2828 |
| 2.8384 | 13.59 | 7000 | 2.9530 | 1.2816 |
| 2.8855 | 14.56 | 7500 | 2.9978 | 1.0121 |
| 2.8479 | 15.53 | 8000 | 2.9722 | 1.0977 |
| 2.8241 | 16.5 | 8500 | 2.9670 | 1.3082 |
| 2.807 | 17.48 | 9000 | 2.9841 | 1.2914 |
| 2.8115 | 18.45 | 9500 | 2.9484 | 1.2977 |
| 2.8123 | 19.42 | 10000 | 2.9310 | 1.2914 |
| 3.0291 | 20.39 | 10500 | 2.9665 | 1.2902 |
| 2.8735 | 21.36 | 11000 | 2.9245 | 1.1160 |
| 2.8164 | 22.33 | 11500 | 2.9137 | 1.2914 |
| 2.8084 | 23.3 | 12000 | 2.9543 | 1.1891 |
| 2.8079 | 24.27 | 12500 | 2.9179 | 1.4516 |
| 2.7916 | 25.24 | 13000 | 2.8971 | 1.2926 |
| 2.7824 | 26.21 | 13500 | 2.8990 | 1.2914 |
| 2.7555 | 27.18 | 14000 | 2.9004 | 1.2914 |
| 2.7803 | 28.16 | 14500 | 2.8747 | 1.2910 |
| 2.753 | 29.13 | 15000 | 2.8742 | 1.2914 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
Alred/t5-small-finetuned-summarization-cnn-ver3
|
Alred
| 2022-11-20T03:41:44Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-11-20T02:50:30Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: t5-small-finetuned-summarization-cnn-ver3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-summarization-cnn-ver3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1072
- Bertscore-mean-precision: 0.8861
- Bertscore-mean-recall: 0.8592
- Bertscore-mean-f1: 0.8723
- Bertscore-median-precision: 0.8851
- Bertscore-median-recall: 0.8582
- Bertscore-median-f1: 0.8719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bertscore-mean-precision | Bertscore-mean-recall | Bertscore-mean-f1 | Bertscore-median-precision | Bertscore-median-recall | Bertscore-median-f1 |
|:-------------:|:-----:|:----:|:---------------:|:------------------------:|:---------------------:|:-----------------:|:--------------------------:|:-----------------------:|:-------------------:|
| 2.0168 | 1.0 | 718 | 2.0528 | 0.8870 | 0.8591 | 0.8727 | 0.8864 | 0.8578 | 0.8724 |
| 1.8387 | 2.0 | 1436 | 2.0610 | 0.8863 | 0.8591 | 0.8723 | 0.8848 | 0.8575 | 0.8712 |
| 1.7302 | 3.0 | 2154 | 2.0659 | 0.8856 | 0.8588 | 0.8719 | 0.8847 | 0.8569 | 0.8717 |
| 1.6459 | 4.0 | 2872 | 2.0931 | 0.8860 | 0.8592 | 0.8722 | 0.8850 | 0.8570 | 0.8718 |
| 1.5907 | 5.0 | 3590 | 2.1072 | 0.8861 | 0.8592 | 0.8723 | 0.8851 | 0.8582 | 0.8719 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
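A hedged usage sketch (not part of the original card): summarizing an article with the summarization pipeline; the input text is a placeholder:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Alred/t5-small-finetuned-summarization-cnn-ver3")
article = "(CNN) -- Replace this placeholder with the news article you want to summarize ..."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```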
|
Jellywibble/dalio-convo-finetune-restruct
|
Jellywibble
| 2022-11-20T02:39:45Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-19T19:41:56Z |
---
tags:
- text-generation
library_name: transformers
---
## Model description
Based on Jellywibble/dalio-pretrained-book-bs4-seed1, which was pre-trained on the Dalio Principles book, and then fine-tuned on the handwritten conversations in Jellywibble/dalio_handwritten-conversations.
## Dataset Used
Jellywibble/dalio_handwritten-conversations
## Training Parameters
- Deepspeed on 4xA40 GPUs
- Ensuring EOS token `<s>` appears only at the beginning of each 'This is a conversation where Ray ...'
- Gradient Accumulation steps = 1 (Effective batch size of 4)
- 2e-6 Learning Rate, AdamW optimizer
- Block size of 1000
- Trained for 1 Epoch (additional epochs yielded worse Hellaswag result)
## Metrics
- Hellaswag Perplexity: 29.83
- Eval accuracy: 58.1%
- Eval loss: 1.883
- Checkpoint 9 uploaded
- Wandb run: https://wandb.ai/jellywibble/huggingface/runs/157eehn9?workspace=user-jellywibble
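A hedged generation sketch (not part of the original card); the prompt mirrors the "This is a conversation where Ray ..." framing mentioned above but is otherwise illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Jellywibble/dalio-convo-finetune-restruct")
prompt = (
    "This is a conversation where Ray gives advice on principles.\n"
    "User: How should I think about failure?\nRay:"
)
print(generator(prompt, max_new_tokens=60, do_sample=True)[0]["generated_text"])
```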
|
Jellywibble/dalio-principles-pretrain-v2
|
Jellywibble
| 2022-11-20T01:55:33Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-19T19:42:56Z |
---
tags:
- text-generation
library_name: transformers
---
## Model description
Based on the facebook/opt-30b model, fine-tuned on chunked Dalio responses.
## Dataset Used
Jellywibble/dalio-pretrain-book-dataset-v2
## Training Parameters
- Deepspeed on 4xA40 GPUs
- Ensuring EOS token `<s>` appears only at the beginning of each chunk
- Gradient Accumulation steps = 1 (Effective batch size of 4)
- 3e-6 Learning Rate, AdamW optimizer
- Block size of 800
- Trained for 1 Epoch (additional epochs yielded worse Hellaswag result)
## Metrics
- Hellaswag Perplexity: 30.2
- Eval accuracy: 49.8%
- Eval loss: 2.283
- Checkpoint 16 uploaded
- wandb run: https://wandb.ai/jellywibble/huggingface/runs/2vtr39rk?workspace=user-jellywibble
|
Deepthoughtworks/gpt-neo-2.7B__low-cpu
|
Deepthoughtworks
| 2022-11-19T23:20:13Z | 44 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"rust",
"gpt_neo",
"text-generation",
"text generation",
"causal-lm",
"en",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-11T11:35:56Z |
---
language:
- en
tags:
- text generation
- pytorch
- causal-lm
license: apache-2.0
---
# GPT-Neo 2.7B
## Model Description
GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.
## Training data
GPT-Neo 2.7B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
## Training procedure
This model was trained for 420 billion tokens over 400,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss.
## Intended Use and Limitations
Trained in this way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
>>> generator("EleutherAI has", do_sample=True, min_length=50)
[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Eval results
All evaluations were done using our [evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness). Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our [Discord](https://discord.gg/vtRgjbM).
### Linguistic Reasoning
| Model and Size | Pile BPB | Pile PPL | Wikitext PPL | Lambada PPL | Lambada Acc | Winogrande | Hellaswag |
| ---------------- | ---------- | ---------- | ------------- | ----------- | ----------- | ---------- | ----------- |
| GPT-Neo 1.3B | 0.7527 | 6.159 | 13.10 | 7.498 | 57.23% | 55.01% | 38.66% |
| GPT-2 1.5B | 1.0468 | ----- | 17.48 | 10.634 | 51.21% | 59.40% | 40.03% |
| **GPT-Neo 2.7B** | **0.7165** | **5.646** | **11.39** | **5.626** | **62.22%** | **56.50%** | **42.73%** |
| GPT-3 Ada | 0.9631 | ----- | ----- | 9.954 | 51.60% | 52.90% | 35.93% |
### Physical and Scientific Reasoning
| Model and Size | MathQA | PubMedQA | Piqa |
| ---------------- | ---------- | ---------- | ----------- |
| GPT-Neo 1.3B | 24.05% | 54.40% | 71.11% |
| GPT-2 1.5B | 23.64% | 58.33% | 70.78% |
| **GPT-Neo 2.7B** | **24.72%** | **57.54%** | **72.14%** |
| GPT-3 Ada | 24.29% | 52.80% | 68.88% |
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, use
```bibtex
@software{gpt-neo,
author = {Black, Sid and
Leo, Gao and
Wang, Phil and
Leahy, Connor and
Biderman, Stella},
title = {{GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow}},
month = mar,
year = 2021,
note = {{If you use this software, please cite it using
these metadata.}},
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.5297715},
url = {https://doi.org/10.5281/zenodo.5297715}
}
@article{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
|
cahya/t5-base-indonesian-summarization-cased
|
cahya
| 2022-11-19T20:41:24Z | 497 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"pipeline:summarization",
"summarization",
"id",
"dataset:id_liputan6",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language: id
tags:
- pipeline:summarization
- summarization
- t5
datasets:
- id_liputan6
---
# Indonesian T5 Summarization Base Model
Finetuned T5 base summarization model for Indonesian.
## Finetuning Corpus
`t5-base-indonesian-summarization-cased` model is based on `t5-base-bahasa-summarization-cased` by [huseinzol05](https://huggingface.co/huseinzol05), finetuned using [id_liputan6](https://huggingface.co/datasets/id_liputan6) dataset.
## Load Finetuned Model
```python
from transformers import T5Tokenizer, T5Model, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("cahya/t5-base-indonesian-summarization-cased")
model = T5ForConditionalGeneration.from_pretrained("cahya/t5-base-indonesian-summarization-cased")
```
## Code Sample
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("cahya/t5-base-indonesian-summarization-cased")
model = T5ForConditionalGeneration.from_pretrained("cahya/t5-base-indonesian-summarization-cased")
#
ARTICLE_TO_SUMMARIZE = ""
# generate summary
input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt')
summary_ids = model.generate(input_ids,
min_length=20,
max_length=80,
num_beams=10,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True,
no_repeat_ngram_size=2,
use_cache=True,
do_sample = True,
temperature = 0.8,
top_k = 50,
top_p = 0.95)
summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary_text)
```
Output:
```
```
|
ocm/xlm-roberta-base-finetuned-panx-de
|
ocm
| 2022-11-19T20:26:55Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-19T20:02:31Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
fernanda-dionello/good-reads-string
|
fernanda-dionello
| 2022-11-19T20:16:34Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:fernanda-dionello/autotrain-data-autotrain_goodreads_string",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-19T20:11:24Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- fernanda-dionello/autotrain-data-autotrain_goodreads_string
co2_eq_emissions:
emissions: 0.04700680417595474
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2164069744
- CO2 Emissions (in grams): 0.0470
## Validation Metrics
- Loss: 0.806
- Accuracy: 0.686
- Macro F1: 0.534
- Micro F1: 0.686
- Weighted F1: 0.678
- Macro Precision: 0.524
- Micro Precision: 0.686
- Weighted Precision: 0.673
- Macro Recall: 0.551
- Micro Recall: 0.686
- Weighted Recall: 0.686
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/fernanda-dionello/autotrain-autotrain_goodreads_string-2164069744
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("fernanda-dionello/autotrain-autotrain_goodreads_string-2164069744", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("fernanda-dionello/autotrain-autotrain_goodreads_string-2164069744", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
chieunq/XLM-R-base-finetuned-uit-vquad-1
|
chieunq
| 2022-11-19T20:02:14Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"vi",
"dataset:uit-vquad",
"arxiv:2009.14725",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-19T19:00:55Z |
---
language: vi
tags:
- vi
- xlm-roberta
widget:
- text: 3 thành viên trong nhóm gồm những ai ?
context: "Nhóm của chúng tôi là sinh viên năm 4 trường ĐH Công Nghệ - ĐHQG Hà Nội. Nhóm gồm 3 thành viên: Nguyễn Quang Chiều, Nguyễn Quang Huy và Nguyễn Trần Anh Đức . Đây là pha Reader trong dự án cuồi kì môn Các vấn đề hiện đại trong CNTT của nhóm ."
datasets:
- uit-vquad
metrics:
- EM (exact match) : 60.63
- F1 : 79.63
---
We fine-tuned the XLM-RoBERTa-base model on the UIT-ViQuAD dataset (https://arxiv.org/pdf/2009.14725.pdf).
### Performance
- EM (exact match) : 60.63
- F1 : 79.63
### How to run
```
from transformers import pipeline
# Replace this with your own checkpoint
model_checkpoint = "chieunq/XLM-R-base-finetuned-uit-vquad-1"
question_answerer = pipeline("question-answering", model=model_checkpoint)
context = """
Nhóm của chúng tôi là sinh viên năm 4 trường ĐH Công Nghệ - ĐHQG Hà Nội. Nhóm gồm 3 thành viên : Nguyễn Quang Chiều, Nguyễn Quang Huy và Nguyễn Trần Anh Đức . Đây là pha Reader trong dự án cuồi kì môn Các vấn đề hiện đại trong CNTT của nhóm .
"""
question = "3 thành viên trong nhóm gồm những ai ?"
question_answerer(question=question, context=context)
```
### Output
```
{'score': 0.9928902387619019,
'start': 98,
'end': 158,
'answer': 'Nguyễn Quang Chiều, Nguyễn Quang Huy và Nguyễn Trần Anh Đức.'}
```
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Froddan/furiostyle
|
Froddan
| 2022-11-19T19:28:35Z | 0 | 3 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:cc0-1.0",
"region:us"
] |
text-to-image
| 2022-11-19T19:10:50Z |
---
license: cc0-1.0
inference: false
language:
- en
tags:
- stable-diffusion
- text-to-image
---
# Stable Diffusion fine tuned on art by [Furio Tedeschi](https://www.furiotedeschi.com/)
### Usage
Use by adding the keyword "furiostyle" to the prompt. The model was trained with the "demon" classname, which can also be added to the prompt.
## Samples
For this model I made two checkpoints. The "furiostyle demon x2" model is trained for twice as long as the regular checkpoint, meaning it should be more fine tuned on the style but also more rigid. The top 4 images are from the regular version; the rest are from the x2 version. I hope this gives you an idea of what kind of styles can be created with this model. I think the x2 model got better results this time around, if you compare the dog and the mushroom images.
<img src="https://huggingface.co/Froddan/furiostyle/resolve/main/1000_2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/furiostyle/resolve/main/1000_4.png" width="256px"/>
<img src="https://huggingface.co/Froddan/furiostyle/resolve/main/dog_1000_2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/furiostyle/resolve/main/mushroom_1000_2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/furiostyle/resolve/main/2000_1.png" width="256px"/>
<img src="https://huggingface.co/Froddan/furiostyle/resolve/main/2000_4.png" width="256px"/>
<img src="https://huggingface.co/Froddan/furiostyle/resolve/main/mushroom_cave_4.png" width="256px"/>
<img src="https://huggingface.co/Froddan/furiostyle/resolve/main/mushroom_cave_ornate.png" width="256px"/>
<img src="https://huggingface.co/Froddan/furiostyle/resolve/main/dog_2.png" width="256px"/>
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
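As a hedged sketch (not part of the original card), assuming Diffusers-format weights are available in this repository; the prompt uses the "furiostyle" keyword and "demon" classname noted above:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Froddan/furiostyle", torch_dtype=torch.float16  # assumption: Diffusers weights
).to("cuda")
image = pipe("furiostyle demon, concept art").images[0]
image.save("furiostyle_demon.png")
```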
|
Froddan/bulgarov
|
Froddan
| 2022-11-19T19:23:36Z | 0 | 1 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:cc0-1.0",
"region:us"
] |
text-to-image
| 2022-11-19T16:11:02Z |
---
license: cc0-1.0
inference: false
language:
- en
tags:
- stable-diffusion
- text-to-image
---
# Stable Diffusion fine tuned on art by [Vitaly Bulgarov](https://www.artstation.com/vbulgarov)
### Usage
Use by adding the keyword "bulgarovstyle" to the prompt. The model was trained with the "knight" classname, which can also be added to the prompt.
## Samples
For this model I made two checkpoints. The "bulgarovstyle knight x2" model is trained for twice as long as the regular checkpoint, meaning it should be more fine tuned on the style but also more rigid. The top 3 images are from the regular version, the rest are from the x2 version (I think). I hope it gives you an idea of what kind of styles can be created with this model.
<img src="https://huggingface.co/Froddan/bulgarov/resolve/main/dog_v1_1.png" width="256px"/>
<img src="https://huggingface.co/Froddan/bulgarov/resolve/main/greg_v1.png" width="256px"/>
<img src="https://huggingface.co/Froddan/bulgarov/resolve/main/greg3.png" width="256px"/>
<img src="https://huggingface.co/Froddan/bulgarov/resolve/main/index4.png" width="256px"/>
<img src="https://huggingface.co/Froddan/bulgarov/resolve/main/index_1600_2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/bulgarov/resolve/main/index_1600_4.png" width="256px"/>
<img src="https://huggingface.co/Froddan/bulgarov/resolve/main/tmp1zir5pbb.png" width="256px"/>
<img src="https://huggingface.co/Froddan/bulgarov/resolve/main/tmp6lk0vp7p.png" width="256px"/>
<img src="https://huggingface.co/Froddan/bulgarov/resolve/main/tmpgabti6yx.png" width="256px"/>
<img src="https://huggingface.co/Froddan/bulgarov/resolve/main/tmpgvytng2n.png" width="256px"/>
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
|
stephenhbarlow/biobert-base-cased-v1.2-multiclass-finetuned-PET2
|
stephenhbarlow
| 2022-11-19T18:53:28Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-19T16:45:29Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: biobert-base-cased-v1.2-multiclass-finetuned-PET2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-multiclass-finetuned-PET2
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8075
- Accuracy: 0.5673
- F1: 0.4253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0175 | 1.0 | 14 | 0.8446 | 0.5625 | 0.4149 |
| 0.8634 | 2.0 | 28 | 0.8075 | 0.5673 | 0.4253 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
kormilitzin/en_core_med7_trf
|
kormilitzin
| 2022-11-19T18:51:54Z | 375 | 12 |
spacy
|
[
"spacy",
"token-classification",
"en",
"license:mit",
"model-index",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- spacy
- token-classification
language:
- en
license: mit
model-index:
- name: en_core_med7_trf
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8822157434
- name: NER Recall
type: recall
value: 0.925382263
- name: NER F Score
type: f_score
value: 0.9032835821
---
| Feature | Description |
| --- | --- |
| **Name** | `en_core_med7_trf` |
| **Version** | `3.4.2.1` |
| **spaCy** | `>=3.4.2,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Andrey Kormilitzin](https://www.kormilitzin.com/) |
### Label Scheme
<details>
<summary>View label scheme (7 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `DOSAGE`, `DRUG`, `DURATION`, `FORM`, `FREQUENCY`, `ROUTE`, `STRENGTH` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 90.33 |
| `ENTS_P` | 88.22 |
| `ENTS_R` | 92.54 |
| `TRANSFORMER_LOSS` | 2502627.06 |
| `NER_LOSS` | 114576.77 |
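A hedged usage sketch (not part of the original card), assuming the packaged pipeline has been installed (e.g. via pip from this repository); the example sentence is illustrative:

```python
import spacy

nlp = spacy.load("en_core_med7_trf")
doc = nlp("Take two 500 mg tablets of paracetamol by mouth twice a day for 5 days.")
print([(ent.text, ent.label_) for ent in doc.ents])  # DRUG, STRENGTH, FORM, ROUTE, FREQUENCY, DURATION
```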
### BibTeX entry and citation info
```bibtex
@article{kormilitzin2021med7,
title={Med7: A transferable clinical natural language processing model for electronic health records},
author={Kormilitzin, Andrey and Vaci, Nemanja and Liu, Qiang and Nevado-Holgado, Alejo},
journal={Artificial Intelligence in Medicine},
volume={118},
pages={102086},
year={2021},
publisher={Elsevier}
}
```
|
Froddan/hurrishiny
|
Froddan
| 2022-11-19T18:34:05Z | 0 | 1 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:cc0-1.0",
"region:us"
] |
text-to-image
| 2022-11-19T15:14:11Z |
---
license: cc0-1.0
inference: false
language:
- en
tags:
- stable-diffusion
- text-to-image
---
# Stable Diffusion fine tuned on art by [Björn Hurri](https://www.artstation.com/bjornhurri)
This model is fine tuned on some of his "shiny"-style paintings. I also have a version for his "matte" works.
### Usage
Use by adding the keyword "hurrishiny" to the prompt. The model was trained with the "monster" classname, which can also be added to the prompt.
## Samples
For this model I made two checkpoints. The "hurrishiny monster x2" model is trained for twice as long as the regular checkpoint, meaning it should be more fine tuned on the style but also more rigid. The top 4 images are from the regular version, the rest are from the x2 version. I hope it gives you an idea of what kind of styles can be created with this model.
<img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/1700_1.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/1700_2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/1700_3.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/1700_4.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/3400_1.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/3400_2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/3400_3.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/3400_4.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/index1.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/index3.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/index5.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrishiny/resolve/main/index6.png" width="256px"/>
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
|
yunseokj/ddpm-butterflies-128
|
yunseokj
| 2022-11-19T18:20:57Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-19T17:31:45Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/yunseokj/ddpm-butterflies-128/tensorboard?#scalars)
|
Froddan/hurrimatte
|
Froddan
| 2022-11-19T18:11:55Z | 0 | 1 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:cc0-1.0",
"region:us"
] |
text-to-image
| 2022-11-19T15:10:08Z |
---
license: cc0-1.0
inference: false
language:
- en
tags:
- stable-diffusion
- text-to-image
---
# Stable Diffusion fine tuned on art by [Björn Hurri](https://www.artstation.com/bjornhurri)
This model is fine tuned on some of his matte-style paintings. I also have a version for his "shinier" works.
### Usage
Use by adding the keyword "hurrimatte" to the prompt. The model was trained with the "monster" classname, which can also be added to the prompt.
## Samples
For this model I made two checkpoints. The "hurrimatte monster x2" model is trained for twice as long as the regular checkpoint, meaning it should be more fine tuned on the style but also more rigid. The top 3 images are from the regular version, the rest are from the x2 version. I hope it gives you an idea of what kind of styles can be created with this model.
<img src="https://huggingface.co/Froddan/hurrimatte/resolve/main/index_1200_3.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrimatte/resolve/main/index_1200_4.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrimatte/resolve/main/1200_4.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrimatte/resolve/main/index2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrimatte/resolve/main/index3.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrimatte/resolve/main/index_2400_5.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrimatte/resolve/main/index_2400_6.png" width="256px"/>
<img src="https://huggingface.co/Froddan/hurrimatte/resolve/main/index_2400_7.png" width="256px"/>
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
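A minimal generation sketch with 🧨 Diffusers is shown below. It assumes the weights in this repo are available in the diffusers format; if only a `.ckpt` is provided, convert it first or load it in your Stable Diffusion UI of choice. The prompt is only an illustration.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned weights (assumes a diffusers-format repo)
pipe = StableDiffusionPipeline.from_pretrained(
    "Froddan/hurrimatte", torch_dtype=torch.float16
).to("cuda")

# Use the trigger keyword "hurrimatte" (optionally with the "monster" class name)
prompt = "hurrimatte monster, swamp creature, matte painting, moody lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("hurrimatte_monster.png")
```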
|
LaurentiuStancioiu/distilbert-base-uncased-finetuned-emotion
|
LaurentiuStancioiu
| 2022-11-19T18:09:47Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-19T17:54:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.902
- name: F1
type: f1
value: 0.9000722917492663
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3554
- Accuracy: 0.902
- F1: 0.9001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0993 | 1.0 | 125 | 0.5742 | 0.8045 | 0.7747 |
| 0.4436 | 2.0 | 250 | 0.3554 | 0.902 | 0.9001 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sd-concepts-library/ghibli-face
|
sd-concepts-library
| 2022-11-19T17:52:39Z | 0 | 4 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-19T17:52:35Z |
---
license: mit
---
### ghibli-face on Stable Diffusion
This is the `<ghibli-face>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
monakth/distilbert-base-cased-finetuned-squadv2
|
monakth
| 2022-11-19T17:02:46Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-19T17:01:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-cased-finetuned-squadv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-squadv
This model is a fine-tuned version of [monakth/distilbert-base-cased-finetuned-squad](https://huggingface.co/monakth/distilbert-base-cased-finetuned-squad) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
4eJIoBek/green_elephant_jukebox1b
|
4eJIoBek
| 2022-11-19T16:22:09Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2022-10-01T09:08:50Z |
---
license: openrail
---
это очень плохой файнтюн 1b jukebox модели на ~25 минутах ремиксов с зелёным слоником, а точнее на тех моментах, где используется момент, где пахом пытался петь(та-тратарутару та типо так), демки есть в файлах. датасет потерял.
КАК ИСПОЛЬЗОВАТЬ?
распаковать архив и папку juke переместить в корень гуглодиска. затем открыть inference.ipynb в колабе.
|
Harrier/dqn-SpaceInvadersNoFrameskip-v4
|
Harrier
| 2022-11-19T15:53:13Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-19T15:52:33Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 615.50 +/- 186.61
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Harrier -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Harrier -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Harrier
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
katboi01/rare-puppers
|
katboi01
| 2022-11-19T15:04:01Z | 186 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-19T15:03:49Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.89552241563797
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu

|
nypnop/distilbert-base-uncased-finetuned-bbc-news
|
nypnop
| 2022-11-19T14:09:27Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-18T14:57:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-bbc-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-bbc-news
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0107
- Accuracy: 0.9955
- F1: 0.9955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3463 | 0.84 | 500 | 0.0392 | 0.9865 | 0.9865 |
| 0.0447 | 1.68 | 1000 | 0.0107 | 0.9955 | 0.9955 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
vikram15/bert-finetuned-ner
|
vikram15
| 2022-11-19T13:21:37Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-19T13:03:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9309775429326288
- name: Recall
type: recall
value: 0.9488387748232918
- name: F1
type: f1
value: 0.9398233038839806
- name: Accuracy
type: accuracy
value: 0.9861806087007712
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0630
- Precision: 0.9310
- Recall: 0.9488
- F1: 0.9398
- Accuracy: 0.9862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0911 | 1.0 | 1756 | 0.0702 | 0.9197 | 0.9345 | 0.9270 | 0.9826 |
| 0.0336 | 2.0 | 3512 | 0.0623 | 0.9294 | 0.9480 | 0.9386 | 0.9864 |
| 0.0174 | 3.0 | 5268 | 0.0630 | 0.9310 | 0.9488 | 0.9398 | 0.9862 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
beyond/genius-base
|
beyond
| 2022-11-19T11:59:46Z | 104 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"GENIUS",
"conditional text generation",
"sketch-based text generation",
"data augmentation",
"en",
"zh",
"dataset:c4",
"dataset:beyond/chinese_clean_passages_80m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-08T06:26:13Z |
---
language:
- en
- zh
tags:
- GENIUS
- conditional text generation
- sketch-based text generation
- data augmentation
license: apache-2.0
datasets:
- c4
- beyond/chinese_clean_passages_80m
widget:
- text: "<mask> Conference on Empirical Methods <mask> submission of research papers <mask> Deep Learning <mask>"
example_title: "Example 1"
- text: "<mask> machine learning <mask> my research interest <mask> data science <mask>"
example_title: "Example 2"
- text: "<mask> play basketball <mask> a strong team <mask> Shanghai University of Finance and Economics <mask> last Sunday <mask>"
example_title: "Example 3"
- text: "Good news: <mask> the European Union <mask> month by EU <mask> Farm Commissioner Franz <mask>"
example_title: "Example with a prompt 1"
- text: "Bad news: <mask> the European Union <mask> month by EU <mask> Farm Commissioner Franz <mask>"
example_title: "Example with a prompt 2"
inference:
parameters:
max_length: 200
num_beams: 3
do_sample: True
---
# 💡GENIUS – generating text using sketches!
**基于草稿的文本生成模型**
- **Paper: [GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation](https://github.com/beyondguo/genius/blob/master/GENIUS_gby_arxiv.pdf)**
💡**GENIUS** is a powerful conditional text generation model using sketches as input, which can fill in the missing contexts for a given **sketch** (key information consisting of textual spans, phrases, or words, concatenated by mask tokens). GENIUS is pre-trained on a large- scale textual corpus with a novel *reconstruction from sketch* objective using an *extreme and selective masking* strategy, enabling it to generate diverse and high-quality texts given sketches.
**Example 1:**
- sketch: `__ machine learning __ my research interest __ data science __`
- **GENIUS**: `I am a Ph.D. student in machine learning, and my research interest is in data science. I am interested in understanding how humans and machines interact and how we can improve the quality of life for people around the world.`
**Example 2:**
- sketch: `自然语言处理__谷歌__通用人工智能__`
- **GENIUS**: `自然语言处理是谷歌在通用人工智能领域的一个重要研究方向,其目的是为了促进人类智能的发展。 `
**GENIUS** can also be used as a general textual **data augmentation tool** for **various NLP tasks** (including sentiment analysis, topic classification, NER, and QA).

- Models hosted in 🤗 Huggingface:
**Model variations:**
| Model | #params | Language | comment|
|------------------------|--------------------------------|-------|---------|
| [`genius-large`](https://huggingface.co/beyond/genius-large) | 406M | English | The version used in **paper** (recommend) |
| [`genius-large-k2t`](https://huggingface.co/beyond/genius-large-k2t) | 406M | English | keywords-to-text |
| [`genius-base`](https://huggingface.co/beyond/genius-base) | 139M | English | smaller version |
| [`genius-base-ps`](https://huggingface.co/beyond/genius-base) | 139M | English | pre-trained both in paragraphs and short sentences |
| [`genius-base-chinese`](https://huggingface.co/beyond/genius-base-chinese) | 116M | 中文 | 在一千万纯净中文段落上预训练|

More Examples:

## Usage
### What is a sketch?
First, what is a **sketch**? As defined in our paper, a sketch is "key information consisting of textual spans, phrases, or words, concatenated by mask tokens". It's like a draft or framework when you begin to write an article. With GENIUS model, you can input some key elements you want to mention in your wrinting, then the GENIUS model can generate cohrent text based on your sketch.
The sketch which can be composed of:
- keywords /key-phrases, like `__NLP__AI__computer__science__`
- spans, like `Conference on Empirical Methods__submission of research papers__`
- sentences, like `I really like machine learning__I work at Google since last year__`
- or a mixup!
### How to use the model
#### 1. If you already have a sketch in mind, and want to get a paragraph based on it...
```python
from transformers import pipeline
# 1. load the model with the huggingface `pipeline`
genius = pipeline("text2text-generation", model='beyond/genius-large', device=0)
# 2. provide a sketch (joint by <mask> tokens)
sketch = "<mask> Conference on Empirical Methods <mask> submission of research papers <mask> Deep Learning <mask>"
# 3. here we go!
generated_text = genius(sketch, num_beams=3, do_sample=True, max_length=200)[0]['generated_text']
print(generated_text)
```
Output:
```shell
'The Conference on Empirical Methods welcomes the submission of research papers. Abstracts should be in the form of a paper or presentation. Please submit abstracts to the following email address: eemml.stanford.edu. The conference will be held at Stanford University on April 1618, 2019. The theme of the conference is Deep Learning.'
```
If you have a lot of sketches, you can batch-up your sketches to a Huggingface `Dataset` object, which can be much faster.
TODO: we are also building a python package for more convenient use of GENIUS, which will be released in few weeks.
#### 2. If you have an NLP dataset (e.g. classification) and want to do data augmentation to enlarge your dataset...
Please check [genius/augmentation_clf](https://github.com/beyondguo/genius/tree/master/augmentation_clf) and [genius/augmentation_ner_qa](https://github.com/beyondguo/genius/tree/master/augmentation_ner_qa), where we provide ready-to-run scripts for data augmentation for text classification/NER/MRC tasks.
## Augmentation Experiments:
Data augmentation is an important application for natural language generation (NLG) models, which is also a valuable evaluation of whether the generated text can be used in real applications.
- Setting: Low-resource setting, where only n={50,100,200,500,1000} labeled samples are available for training. The below results are the average of all training sizes.
- Text Classification Datasets: [HuffPost](https://huggingface.co/datasets/khalidalt/HuffPost), [BBC](https://huggingface.co/datasets/SetFit/bbc-news), [SST2](https://huggingface.co/datasets/glue), [IMDB](https://huggingface.co/datasets/imdb), [Yahoo](https://huggingface.co/datasets/yahoo_answers_topics), [20NG](https://huggingface.co/datasets/newsgroup).
- Base classifier: [DistilBERT](https://huggingface.co/distilbert-base-cased)
In-distribution (ID) evaluations:
| Method | Huff | BBC | Yahoo | 20NG | IMDB | SST2 | avg. |
|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| none | 79.17 | **96.16** | 45.77 | 46.67 | 77.87 | 76.67 | 70.39 |
| EDA | 79.20 | 95.11 | 45.10 | 46.15 | 77.88 | 75.52 | 69.83 |
| BackT | 80.48 | 95.28 | 46.10 | 46.61 | 78.35 | 76.96 | 70.63 |
| MLM | 80.04 | 96.07 | 45.35 | 46.53 | 75.73 | 76.61 | 70.06 |
| C-MLM | 80.60 | 96.13 | 45.40 | 46.36 | 77.31 | 76.91 | 70.45 |
| LAMBADA | 81.46 | 93.74 | 50.49 | 47.72 | 78.22 | 78.31 | 71.66 |
| STA | 80.74 | 95.64 | 46.96 | 47.27 | 77.88 | 77.80 | 71.05 |
| **GeniusAug** | 81.43 | 95.74 | 49.60 | 50.38 | **80.16** | 78.82 | 72.68 |
| **GeniusAug-f** | **81.82** | 95.99 | **50.42** | **50.81** | 79.40 | **80.57** | **73.17** |
Out-of-distribution (OOD) evaluations:
| | Huff->BBC | BBC->Huff | IMDB->SST2 | SST2->IMDB | avg. |
|------------|:----------:|:----------:|:----------:|:----------:|:----------:|
| none | 62.32 | 62.00 | 74.37 | 73.11 | 67.95 |
| EDA | 67.48 | 58.92 | 75.83 | 69.42 | 67.91 |
| BackT | 67.75 | 63.10 | 75.91 | 72.19 | 69.74 |
| MLM | 66.80 | 65.39 | 73.66 | 73.06 | 69.73 |
| C-MLM | 64.94 | **67.80** | 74.98 | 71.78 | 69.87 |
| LAMBADA | 68.57 | 52.79 | 75.24 | 76.04 | 68.16 |
| STA | 69.31 | 64.82 | 74.72 | 73.62 | 70.61 |
| **GeniusAug** | 74.87 | 66.85 | 76.02 | 74.76 | 73.13 |
| **GeniusAug-f** | **76.18** | 66.89 | **77.45** | **80.36** | **75.22** |
### BibTeX entry and citation info
TBD
|
viktor-enzell/wav2vec2-large-voxrex-swedish-4gram
|
viktor-enzell
| 2022-11-19T11:06:02Z | 5,719 | 5 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"hf-asr-leaderboard",
"sv",
"dataset:common_voice",
"dataset:NST_Swedish_ASR_Database",
"dataset:P4",
"dataset:The_Swedish_Culturomics_Gigaword_Corpus",
"license:cc0-1.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-26T13:32:57Z |
---
language: sv
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- hf-asr-leaderboard
- sv
license: cc0-1.0
datasets:
- common_voice
- NST_Swedish_ASR_Database
- P4
- The_Swedish_Culturomics_Gigaword_Corpus
model-index:
- name: Wav2vec 2.0 large VoxRex Swedish (C) with 4-gram
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 6.4723
---
# KBLab's wav2vec 2.0 large VoxRex Swedish (C) with 4-gram model
Training of the acoustic model is the work of KBLab. See [VoxRex-C](https://huggingface.co/KBLab/wav2vec2-large-voxrex-swedish) for more details. This repo extends the acoustic model with a social media 4-gram language model for boosted performance.
## Model description
VoxRex-C is extended with a 4-gram language model estimated from a subset extracted from [The Swedish Culturomics Gigaword Corpus](https://spraakbanken.gu.se/resurser/gigaword) from Språkbanken. The subset contains 40M words from the social media genre between 2010 and 2015.
## How to use
#### Simple usage example with pipeline
```python
import torch
from transformers import pipeline
# Load the model. Using GPU if available
model_name = 'viktor-enzell/wav2vec2-large-voxrex-swedish-4gram'
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
pipe = pipeline(model=model_name).to(device)
# Run inference on an audio file
output = pipe('path/to/audio.mp3')['text']
```
#### More verbose usage example with audio pre-processing
Example of transcribing 1% of the Common Voice test split. The model expects 16kHz audio, so audio with another sampling rate is resampled to 16kHz.
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM
from datasets import load_dataset
import torch
import torchaudio.functional as F
# Import model and processor. Using GPU if available
model_name = 'viktor-enzell/wav2vec2-large-voxrex-swedish-4gram'
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device);
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name)
# Import and process speech data
common_voice = load_dataset('common_voice', 'sv-SE', split='test[:1%]')
def speech_file_to_array(sample):
# Convert speech file to array and downsample to 16 kHz
sampling_rate = sample['audio']['sampling_rate']
sample['speech'] = F.resample(torch.tensor(sample['audio']['array']), sampling_rate, 16_000)
return sample
common_voice = common_voice.map(speech_file_to_array)
# Run inference
inputs = processor(common_voice['speech'], sampling_rate=16_000, return_tensors='pt', padding=True).to(device)
with torch.no_grad():
logits = model(**inputs).logits
transcripts = processor.batch_decode(logits.cpu().numpy()).text
```
## Training procedure
Text data for the n-gram model is pre-processed by removing characters not part of the wav2vec 2.0 vocabulary and uppercasing all characters. After pre-processing and storing each text sample on a new line in a text file, a [KenLM](https://github.com/kpu/kenlm) model is estimated. See [this tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) for more details.
## Evaluation results
The model was evaluated on the full Common Voice test set version 6.1. VoxRex-C achieved a WER of 9.03% without the language model and 6.47% with the language model.
|
KubiakJakub01/finetuned-distilbert-base-uncased
|
KubiakJakub01
| 2022-11-19T10:45:52Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-19T09:14:07Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: KubiakJakub01/finetuned-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KubiakJakub01/finetuned-distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2767
- Validation Loss: 0.4326
- Train Accuracy: 0.8319
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1140, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.4680 | 0.4008 | 0.8378 | 0 |
| 0.3475 | 0.4017 | 0.8385 | 1 |
| 0.2767 | 0.4326 | 0.8319 | 2 |
### Framework versions
- Transformers 4.21.3
- TensorFlow 2.9.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jonathanrichard13/pegasus-xsum-reddit-clean-4
|
jonathanrichard13
| 2022-11-19T10:22:51Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:reddit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-19T07:21:12Z |
---
tags:
- generated_from_trainer
datasets:
- reddit
metrics:
- rouge
model-index:
- name: pegasus-xsum-reddit-clean-4
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: reddit
type: reddit
args: default
metrics:
- name: Rouge1
type: rouge
value: 27.7525
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-xsum-reddit-clean-4
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on the reddit dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7697
- Rouge1: 27.7525
- Rouge2: 7.9823
- Rougel: 20.9276
- Rougelsum: 22.6678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.0594 | 1.0 | 1906 | 2.8489 | 27.9837 | 8.0824 | 20.9135 | 22.7261 |
| 2.861 | 2.0 | 3812 | 2.7793 | 27.8298 | 8.048 | 20.8653 | 22.6781 |
| 2.7358 | 3.0 | 5718 | 2.7697 | 27.7525 | 7.9823 | 20.9276 | 22.6678 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AndrewZeng/S2KG-base
|
AndrewZeng
| 2022-11-19T09:34:25Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2210.08873",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-19T09:15:53Z |
# Semi-Supervised Knowledge-Grounded Pre-training for Task-Oriented Dialog Systems
We present our models for Track 2 of the SereTOD 2022 challenge, which is the first challenge of building semi-supervised and reinforced TOD systems on a large-scale real-world Chinese TOD dataset MobileCS. We build a knowledge-grounded dialog model, S2KG to formulate dialog history and local KB as input and predict the system response.
[This paper](https://arxiv.org/abs/2210.08873) has been accepted at the SereTOD 2022 Workshop, EMNLP 2022
## System Performance
Our system achieves the first place both in the automatic evaluation and human interaction, especially with higher BLEU (+7.64) and Success (+13.6%) than the second place. The evaluation results for both Track 1 and Track 2, which can be accessed via this [this link](https://docs.google.com/spreadsheets/d/1w28AKkG6Wjmuo15QlRlRyrnv859MT1ry0CHV8tFxY9o/edit#gid=0).
## S2KG for Generation
We release our S2KG-base model here. You can use this model for knowledge-grounded dialogue generation follow instructions [S2KG](https://github.com/Zeng-WH/S2KG).
|
AIGeorgeLi/distilbert-base-uncased-finetuned-emotion
|
AIGeorgeLi
| 2022-11-19T07:43:40Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-10T02:35:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9249666906714753
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2271
- Accuracy: 0.925
- F1: 0.9250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8554 | 1.0 | 250 | 0.3419 | 0.898 | 0.8943 |
| 0.2627 | 2.0 | 500 | 0.2271 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
coderSounak/finetuned_twitter_targeted_insult_LSTM
|
coderSounak
| 2022-11-19T07:04:24Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-19T07:02:35Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_twitter_targeted_insult_LSTM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_twitter_targeted_insult_LSTM
This model is a fine-tuned version of [LYTinn/lstm-finetuning-sentiment-model-3000-samples](https://huggingface.co/LYTinn/lstm-finetuning-sentiment-model-3000-samples) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6314
- Accuracy: 0.6394
- F1: 0.6610
- Precision: 0.6262
- Recall: 0.6998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
coderSounak/finetuned_twitter_profane_LSTM
|
coderSounak
| 2022-11-19T06:57:55Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-19T06:54:58Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_twitter_profane_LSTM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_twitter_profane_LSTM
This model is a fine-tuned version of [LYTinn/lstm-finetuning-sentiment-model-3000-samples](https://huggingface.co/LYTinn/lstm-finetuning-sentiment-model-3000-samples) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5529
- Accuracy: 0.7144
- F1: 0.7380
- Precision: 0.7013
- Recall: 0.7788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
KellyShiiii/primer-crd3
|
KellyShiiii
| 2022-11-19T06:47:19Z | 92 | 0 |
transformers
|
[
"transformers",
"pytorch",
"led",
"text2text-generation",
"generated_from_trainer",
"dataset:crd3",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-17T04:19:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- crd3
metrics:
- rouge
model-index:
- name: primer-crd3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: crd3
type: crd3
config: default
split: train[:500]
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1510358452879352
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# primer-crd3
This model is a fine-tuned version of [allenai/PRIMERA](https://huggingface.co/allenai/PRIMERA) on the crd3 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8193
- Rouge1: 0.1510
- Rouge2: 0.0279
- Rougel: 0.1251
- Rougelsum: 0.1355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 250 | 2.9569 | 0.1762 | 0.0485 | 0.1525 | 0.1605 |
| 1.7993 | 2.0 | 500 | 3.4079 | 0.1612 | 0.0286 | 0.1367 | 0.1444 |
| 1.7993 | 3.0 | 750 | 3.8193 | 0.1510 | 0.0279 | 0.1251 | 0.1355 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.8.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
sd-concepts-library/yoshimurachi
|
sd-concepts-library
| 2022-11-19T06:43:59Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-19T06:43:53Z |
---
license: mit
---
### Yoshimurachi on Stable Diffusion
This is the `<yoshi-san>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
meongracun/nmt-mpst-id-en-lr_1e-05-ep_10-seq_128_bs-16
|
meongracun
| 2022-11-19T06:16:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-19T05:45:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-mpst-id-en-lr_1e-05-ep_10-seq_128_bs-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_1e-05-ep_10-seq_128_bs-16
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8391
- Bleu: 0.0308
- Meteor: 0.1222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 404 | 3.1172 | 0.0194 | 0.0879 |
| 3.6071 | 2.0 | 808 | 2.9990 | 0.0251 | 0.1066 |
| 3.2935 | 3.0 | 1212 | 2.9471 | 0.027 | 0.1118 |
| 3.1963 | 4.0 | 1616 | 2.9105 | 0.0281 | 0.1145 |
| 3.1602 | 5.0 | 2020 | 2.8873 | 0.0286 | 0.1168 |
| 3.1602 | 6.0 | 2424 | 2.8686 | 0.0293 | 0.1187 |
| 3.1194 | 7.0 | 2828 | 2.8547 | 0.0301 | 0.1204 |
| 3.0906 | 8.0 | 3232 | 2.8464 | 0.0306 | 0.1214 |
| 3.0866 | 9.0 | 3636 | 2.8408 | 0.0307 | 0.1221 |
| 3.0672 | 10.0 | 4040 | 2.8391 | 0.0308 | 0.1222 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
meongracun/nmt-mpst-id-en-lr_0.0001-ep_10-seq_128_bs-16
|
meongracun
| 2022-11-19T06:06:39Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-19T05:35:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-mpst-id-en-lr_0.0001-ep_10-seq_128_bs-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_0.0001-ep_10-seq_128_bs-16
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1098
- Bleu: 0.0918
- Meteor: 0.2374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 404 | 2.7230 | 0.0372 | 0.1397 |
| 3.1248 | 2.0 | 808 | 2.5087 | 0.0495 | 0.1692 |
| 2.7527 | 3.0 | 1212 | 2.3751 | 0.062 | 0.1916 |
| 2.5311 | 4.0 | 1616 | 2.2955 | 0.0703 | 0.2068 |
| 2.4088 | 5.0 | 2020 | 2.2217 | 0.0785 | 0.2173 |
| 2.4088 | 6.0 | 2424 | 2.1797 | 0.0822 | 0.2223 |
| 2.297 | 7.0 | 2828 | 2.1409 | 0.0859 | 0.2283 |
| 2.2287 | 8.0 | 3232 | 2.1239 | 0.0891 | 0.2326 |
| 2.1918 | 9.0 | 3636 | 2.1117 | 0.0907 | 0.2357 |
| 2.1626 | 10.0 | 4040 | 2.1098 | 0.0918 | 0.2374 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
meongracun/nmt-mpst-id-en-lr_0.001-ep_10-seq_128_bs-16
|
meongracun
| 2022-11-19T06:06:24Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-19T05:34:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-mpst-id-en-lr_0.001-ep_10-seq_128_bs-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_0.001-ep_10-seq_128_bs-16
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6393
- Bleu: 0.1929
- Meteor: 0.3605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 404 | 2.1057 | 0.1016 | 0.2499 |
| 2.6026 | 2.0 | 808 | 1.7919 | 0.1333 | 0.2893 |
| 1.8228 | 3.0 | 1212 | 1.6738 | 0.1568 | 0.3205 |
| 1.4557 | 4.0 | 1616 | 1.6240 | 0.1677 | 0.3347 |
| 1.2482 | 5.0 | 2020 | 1.5976 | 0.1786 | 0.3471 |
| 1.2482 | 6.0 | 2424 | 1.5997 | 0.1857 | 0.3539 |
| 1.0644 | 7.0 | 2828 | 1.5959 | 0.188 | 0.3553 |
| 0.9399 | 8.0 | 3232 | 1.6128 | 0.19 | 0.3583 |
| 0.8668 | 9.0 | 3636 | 1.6260 | 0.1922 | 0.3593 |
| 0.8001 | 10.0 | 4040 | 1.6393 | 0.1929 | 0.3605 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
meongracun/nmt-mpst-id-en-lr_0.0001-ep_10-seq_128_bs-32
|
meongracun
| 2022-11-19T05:54:44Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-19T05:26:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-mpst-id-en-lr_0.0001-ep_10-seq_128_bs-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_0.0001-ep_10-seq_128_bs-32
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2914
- Bleu: 0.0708
- Meteor: 0.2054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 202 | 2.8210 | 0.0313 | 0.1235 |
| No log | 2.0 | 404 | 2.6712 | 0.0398 | 0.1478 |
| 3.0646 | 3.0 | 606 | 2.5543 | 0.0483 | 0.1661 |
| 3.0646 | 4.0 | 808 | 2.4735 | 0.0537 | 0.1751 |
| 2.6866 | 5.0 | 1010 | 2.4120 | 0.0591 | 0.1855 |
| 2.6866 | 6.0 | 1212 | 2.3663 | 0.0618 | 0.1906 |
| 2.6866 | 7.0 | 1414 | 2.3324 | 0.0667 | 0.1993 |
| 2.5034 | 8.0 | 1616 | 2.3098 | 0.0684 | 0.2023 |
| 2.5034 | 9.0 | 1818 | 2.2969 | 0.0696 | 0.2042 |
| 2.4271 | 10.0 | 2020 | 2.2914 | 0.0708 | 0.2054 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
meongracun/nmt-mpst-id-en-lr_1e-05-ep_10-seq_128_bs-32
|
meongracun
| 2022-11-19T05:41:31Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-19T05:13:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-mpst-id-en-lr_1e-05-ep_10-seq_128_bs-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_1e-05-ep_10-seq_128_bs-32
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9022
- Bleu: 0.0284
- Meteor: 0.1159
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 202 | 3.2021 | 0.0126 | 0.0683 |
| No log | 2.0 | 404 | 3.0749 | 0.0219 | 0.0958 |
| 3.559 | 3.0 | 606 | 3.0147 | 0.0252 | 0.1059 |
| 3.559 | 4.0 | 808 | 2.9738 | 0.0262 | 0.1094 |
| 3.2602 | 5.0 | 1010 | 2.9476 | 0.027 | 0.1113 |
| 3.2602 | 6.0 | 1212 | 2.9309 | 0.0278 | 0.1138 |
| 3.2602 | 7.0 | 1414 | 2.9153 | 0.0278 | 0.1139 |
| 3.1839 | 8.0 | 1616 | 2.9083 | 0.0285 | 0.116 |
| 3.1839 | 9.0 | 1818 | 2.9041 | 0.0284 | 0.1158 |
| 3.1574 | 10.0 | 2020 | 2.9022 | 0.0284 | 0.1159 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
meongracun/nmt-mpst-id-en-lr_0.0001-ep_20-seq_128_bs-16
|
meongracun
| 2022-11-19T05:30:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-19T04:31:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-mpst-id-en-lr_0.0001-ep_20-seq_128_bs-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_0.0001-ep_20-seq_128_bs-16
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8531
- Bleu: 0.1306
- Meteor: 0.2859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 404 | 2.7171 | 0.0374 | 0.14 |
| 3.1222 | 2.0 | 808 | 2.4821 | 0.0519 | 0.1723 |
| 2.7305 | 3.0 | 1212 | 2.3370 | 0.0663 | 0.1983 |
| 2.4848 | 4.0 | 1616 | 2.2469 | 0.0771 | 0.2158 |
| 2.3394 | 5.0 | 2020 | 2.1567 | 0.0857 | 0.227 |
| 2.3394 | 6.0 | 2424 | 2.1038 | 0.0919 | 0.2369 |
| 2.2007 | 7.0 | 2828 | 2.0403 | 0.0973 | 0.2449 |
| 2.1027 | 8.0 | 3232 | 2.0105 | 0.1066 | 0.2554 |
| 2.0299 | 9.0 | 3636 | 1.9725 | 0.1105 | 0.2606 |
| 1.9568 | 10.0 | 4040 | 1.9515 | 0.1147 | 0.2655 |
| 1.9568 | 11.0 | 4444 | 1.9274 | 0.118 | 0.2699 |
| 1.8986 | 12.0 | 4848 | 1.9142 | 0.1215 | 0.2739 |
| 1.8512 | 13.0 | 5252 | 1.8936 | 0.1243 | 0.2777 |
| 1.8258 | 14.0 | 5656 | 1.8841 | 0.1254 | 0.279 |
| 1.7854 | 15.0 | 6060 | 1.8792 | 0.1278 | 0.2827 |
| 1.7854 | 16.0 | 6464 | 1.8662 | 0.1274 | 0.2818 |
| 1.7598 | 17.0 | 6868 | 1.8604 | 0.1293 | 0.2834 |
| 1.7436 | 18.0 | 7272 | 1.8598 | 0.13 | 0.2849 |
| 1.7299 | 19.0 | 7676 | 1.8545 | 0.1308 | 0.2857 |
| 1.7168 | 20.0 | 8080 | 1.8531 | 0.1306 | 0.2859 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
elRivx/gBWoman
|
elRivx
| 2022-11-19T04:57:34Z | 0 | 1 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-11-19T04:40:07Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
# gBWoman
This is a Stable Diffusion custom model that bring to you a woman generated with non-licenced images.
The magic word is: gBWoman
If you enjoy my work, please consider supporting me:
[](https://www.buymeacoffee.com/elrivx)
Examples:
<img src=https://imgur.com/m3hOa5i.png width=30% height=30%>
<img src=https://imgur.com/u0Af9mX.png width=30% height=30%>
<img src=https://imgur.com/VpKDMMK.png width=30% height=30%>
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
meongracun/nmt-mpst-id-en-lr_1e-05-ep_30-seq_128_bs-16
|
meongracun
| 2022-11-19T04:27:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-19T03:01:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-mpst-id-en-lr_1e-05-ep_30-seq_128_bs-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_1e-05-ep_30-seq_128_bs-16
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5612
- Bleu: 0.0476
- Meteor: 0.1643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| No log | 1.0 | 404 | 3.1116 | 0.0198 | 0.0892 |
| 3.6027 | 2.0 | 808 | 2.9875 | 0.0255 | 0.1079 |
| 3.2803 | 3.0 | 1212 | 2.9296 | 0.0276 | 0.1135 |
| 3.1743 | 4.0 | 1616 | 2.8869 | 0.0287 | 0.116 |
| 3.1283 | 5.0 | 2020 | 2.8564 | 0.03 | 0.1208 |
| 3.1283 | 6.0 | 2424 | 2.8257 | 0.0309 | 0.1237 |
| 3.0739 | 7.0 | 2828 | 2.8007 | 0.0324 | 0.1281 |
| 3.0296 | 8.0 | 3232 | 2.7758 | 0.0334 | 0.131 |
| 3.0059 | 9.0 | 3636 | 2.7548 | 0.0346 | 0.134 |
| 2.965 | 10.0 | 4040 | 2.7349 | 0.0362 | 0.1371 |
| 2.965 | 11.0 | 4444 | 2.7176 | 0.0374 | 0.1403 |
| 2.9403 | 12.0 | 4848 | 2.6994 | 0.0382 | 0.1425 |
| 2.9166 | 13.0 | 5252 | 2.6841 | 0.0393 | 0.1448 |
| 2.9023 | 14.0 | 5656 | 2.6681 | 0.0404 | 0.1471 |
| 2.8742 | 15.0 | 6060 | 2.6548 | 0.0411 | 0.1508 |
| 2.8742 | 16.0 | 6464 | 2.6419 | 0.0422 | 0.1529 |
| 2.8523 | 17.0 | 6868 | 2.6286 | 0.0428 | 0.1538 |
| 2.8378 | 18.0 | 7272 | 2.6194 | 0.0434 | 0.1555 |
| 2.8258 | 19.0 | 7676 | 2.6095 | 0.0441 | 0.1568 |
| 2.8019 | 20.0 | 8080 | 2.6005 | 0.0447 | 0.1576 |
| 2.8019 | 21.0 | 8484 | 2.5938 | 0.0455 | 0.1598 |
| 2.7927 | 22.0 | 8888 | 2.5872 | 0.0459 | 0.1603 |
| 2.7846 | 23.0 | 9292 | 2.5800 | 0.0462 | 0.161 |
| 2.7775 | 24.0 | 9696 | 2.5757 | 0.0463 | 0.1621 |
| 2.77 | 25.0 | 10100 | 2.5712 | 0.0466 | 0.1624 |
| 2.7608 | 26.0 | 10504 | 2.5673 | 0.0469 | 0.1633 |
| 2.7608 | 27.0 | 10908 | 2.5645 | 0.0472 | 0.1634 |
| 2.7572 | 28.0 | 11312 | 2.5626 | 0.0474 | 0.1637 |
| 2.7578 | 29.0 | 11716 | 2.5617 | 0.0476 | 0.1641 |
| 2.7568 | 30.0 | 12120 | 2.5612 | 0.0476 | 0.1643 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
meongracun/nmt-mpst-id-en-lr_0.001-ep_30-seq_128_bs-16
|
meongracun
| 2022-11-19T04:24:07Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-19T02:57:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-mpst-id-en-lr_0.001-ep_30-seq_128_bs-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_0.001-ep_30-seq_128_bs-16
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3591
- Bleu: 0.2073
- Meteor: 0.3779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| No log | 1.0 | 404 | 2.0642 | 0.1068 | 0.2561 |
| 2.5607 | 2.0 | 808 | 1.7482 | 0.1392 | 0.299 |
| 1.7768 | 3.0 | 1212 | 1.6392 | 0.1614 | 0.325 |
| 1.4132 | 4.0 | 1616 | 1.6131 | 0.1728 | 0.3418 |
| 1.205 | 5.0 | 2020 | 1.5724 | 0.1854 | 0.3543 |
| 1.205 | 6.0 | 2424 | 1.5988 | 0.1897 | 0.3592 |
| 1.0069 | 7.0 | 2828 | 1.5839 | 0.1922 | 0.3618 |
| 0.8711 | 8.0 | 3232 | 1.6187 | 0.196 | 0.3678 |
| 0.7759 | 9.0 | 3636 | 1.6453 | 0.1968 | 0.3672 |
| 0.6838 | 10.0 | 4040 | 1.6837 | 0.1981 | 0.3685 |
| 0.6838 | 11.0 | 4444 | 1.7401 | 0.1976 | 0.3698 |
| 0.5903 | 12.0 | 4848 | 1.7686 | 0.2016 | 0.3712 |
| 0.5207 | 13.0 | 5252 | 1.8075 | 0.2026 | 0.3733 |
| 0.4712 | 14.0 | 5656 | 1.8665 | 0.2028 | 0.3743 |
| 0.4154 | 15.0 | 6060 | 1.9114 | 0.204 | 0.3746 |
| 0.4154 | 16.0 | 6464 | 1.9556 | 0.2036 | 0.376 |
| 0.3726 | 17.0 | 6868 | 1.9961 | 0.2011 | 0.374 |
| 0.326 | 18.0 | 7272 | 2.0437 | 0.2027 | 0.3739 |
| 0.2936 | 19.0 | 7676 | 2.0946 | 0.2038 | 0.3754 |
| 0.2671 | 20.0 | 8080 | 2.1319 | 0.2041 | 0.374 |
| 0.2671 | 21.0 | 8484 | 2.1717 | 0.2044 | 0.3756 |
| 0.2407 | 22.0 | 8888 | 2.2025 | 0.2045 | 0.3756 |
| 0.2143 | 23.0 | 9292 | 2.2375 | 0.2031 | 0.3734 |
| 0.1974 | 24.0 | 9696 | 2.2544 | 0.2057 | 0.3765 |
| 0.182 | 25.0 | 10100 | 2.2875 | 0.2057 | 0.3767 |
| 0.1686 | 26.0 | 10504 | 2.3153 | 0.2048 | 0.3762 |
| 0.1686 | 27.0 | 10908 | 2.3395 | 0.2063 | 0.3786 |
| 0.1548 | 28.0 | 11312 | 2.3493 | 0.2071 | 0.3783 |
| 0.145 | 29.0 | 11716 | 2.3569 | 0.2072 | 0.3781 |
| 0.1412 | 30.0 | 12120 | 2.3591 | 0.2073 | 0.3779 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Sebabrata/dof-Rai2-1
|
Sebabrata
| 2022-11-19T04:21:37Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-11-18T21:38:29Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: dof-Rai2-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dof-Rai2-1
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
meongracun/nmt-mpst-id-en-lr_0.0001-ep_30-seq_128_bs-32
|
meongracun
| 2022-11-19T04:11:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-19T02:53:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-mpst-id-en-lr_0.0001-ep_30-seq_128_bs-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_0.0001-ep_30-seq_128_bs-32
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8218
- Bleu: 0.1371
- Meteor: 0.294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 202 | 2.6357 | 0.042 | 0.1513 |
| No log | 2.0 | 404 | 2.4891 | 0.0526 | 0.1749 |
| 2.781 | 3.0 | 606 | 2.3754 | 0.062 | 0.1918 |
| 2.781 | 4.0 | 808 | 2.2946 | 0.0693 | 0.2047 |
| 2.4692 | 5.0 | 1010 | 2.2262 | 0.0779 | 0.2175 |
| 2.4692 | 6.0 | 1212 | 2.1729 | 0.0825 | 0.2231 |
| 2.4692 | 7.0 | 1414 | 2.1226 | 0.0897 | 0.2328 |
| 2.2484 | 8.0 | 1616 | 2.0789 | 0.0932 | 0.2381 |
| 2.2484 | 9.0 | 1818 | 2.0450 | 0.1007 | 0.2478 |
| 2.099 | 10.0 | 2020 | 2.0132 | 0.1041 | 0.255 |
| 2.099 | 11.0 | 2222 | 1.9818 | 0.1085 | 0.2584 |
| 2.099 | 12.0 | 2424 | 1.9608 | 0.113 | 0.2639 |
| 1.9729 | 13.0 | 2626 | 1.9422 | 0.1165 | 0.2689 |
| 1.9729 | 14.0 | 2828 | 1.9223 | 0.1186 | 0.2717 |
| 1.8885 | 15.0 | 3030 | 1.9114 | 0.1219 | 0.2757 |
| 1.8885 | 16.0 | 3232 | 1.9020 | 0.1238 | 0.2794 |
| 1.8885 | 17.0 | 3434 | 1.8827 | 0.1254 | 0.2793 |
| 1.8171 | 18.0 | 3636 | 1.8762 | 0.1278 | 0.2824 |
| 1.8171 | 19.0 | 3838 | 1.8686 | 0.1298 | 0.285 |
| 1.7597 | 20.0 | 4040 | 1.8595 | 0.1307 | 0.2864 |
| 1.7597 | 21.0 | 4242 | 1.8533 | 0.1328 | 0.2891 |
| 1.7597 | 22.0 | 4444 | 1.8453 | 0.1335 | 0.2901 |
| 1.7183 | 23.0 | 4646 | 1.8400 | 0.1347 | 0.2912 |
| 1.7183 | 24.0 | 4848 | 1.8342 | 0.135 | 0.2914 |
| 1.6893 | 25.0 | 5050 | 1.8308 | 0.1355 | 0.2919 |
| 1.6893 | 26.0 | 5252 | 1.8258 | 0.1357 | 0.2924 |
| 1.6893 | 27.0 | 5454 | 1.8248 | 0.1365 | 0.2933 |
| 1.6667 | 28.0 | 5656 | 1.8233 | 0.137 | 0.294 |
| 1.6667 | 29.0 | 5858 | 1.8223 | 0.1371 | 0.2941 |
| 1.6585 | 30.0 | 6060 | 1.8218 | 0.1371 | 0.294 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
peter2000/sdg_sentence_transformer
|
peter2000
| 2022-11-19T03:51:38Z | 14 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-19T02:57:29Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# peter2000/sdg_sentence_transformer
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('peter2000/sdg_sentence_transformer')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('peter2000/sdg_sentence_transformer')
model = AutoModel.from_pretrained('peter2000/sdg_sentence_transformer')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=peter2000/sdg_sentence_transformer)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4015 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "constantlr",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0
}
```
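For context, a minimal sketch of how a denoising auto-encoder (TSDAE-style) training run with the parameters above could look in sentence-transformers; the base checkpoint and the training sentences below are assumptions, not taken from this card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses
from sentence_transformers.datasets import DenoisingAutoEncoderDataset

# Encoder with CLS pooling, mirroring the architecture listed below
# (the bert-base-uncased starting checkpoint is an assumption)
word_embedding_model = models.Transformer("bert-base-uncased", max_seq_length=512)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode="cls")
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# Placeholder corpus; the actual training sentences are not documented here
sentences = ["Sustainable development goal sentence one.", "Sustainable development goal sentence two."]
train_dataloader = DataLoader(DenoisingAutoEncoderDataset(sentences), batch_size=8, shuffle=True)
train_loss = losses.DenoisingAutoEncoderLoss(model, tie_encoder_decoder=True)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    scheduler="constantlr",
    optimizer_params={"lr": 3e-05},
    warmup_steps=10000,
    weight_decay=0,
)
```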
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Alred/t5-small-finetuned-summarization-cnn
|
Alred
| 2022-11-19T03:22:38Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-11-19T02:09:50Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-summarization-cnn
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: train[:2%]
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.4825
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-summarization-cnn
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0105
- Rouge1: 24.4825
- Rouge2: 9.1573
- Rougel: 19.7135
- Rougelsum: 22.2551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 2.0389 | 1.0 | 718 | 2.0150 | 24.4413 | 9.1782 | 19.7202 | 22.2225 |
| 1.9497 | 2.0 | 1436 | 2.0105 | 24.4825 | 9.1573 | 19.7135 | 22.2551 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
monideep2255/pseudolabeling-step2-F04
|
monideep2255
| 2022-11-19T02:31:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-19T00:07:19Z |
---
tags:
- generated_from_trainer
model-index:
- name: pseudolabeling-step2-F04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pseudolabeling-step2-F04
This model is a fine-tuned version of [yip-i/wav2vec2-pretrain-demo](https://huggingface.co/yip-i/wav2vec2-pretrain-demo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2502
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 74.4163 | 3.36 | 500 | 3.6878 | 1.0 |
| 3.3612 | 6.71 | 1000 | 3.5619 | 1.0 |
| 3.3127 | 10.07 | 1500 | 3.5773 | 1.0 |
| 3.2104 | 13.42 | 2000 | 3.5299 | 1.0 |
| 3.2067 | 16.78 | 2500 | 3.5704 | 0.9922 |
| 3.1511 | 20.13 | 3000 | 4.3842 | 1.0 |
| 3.0825 | 23.49 | 3500 | 4.2644 | 1.0 |
| 3.0959 | 26.85 | 4000 | 5.2502 | 1.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
juancopi81/distilgpt2-finetuned-yannic-test-1
|
juancopi81
| 2022-11-19T02:07:14Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-19T01:36:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-yannic-test-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-yannic-test-1
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 482 | 3.5938 |
| 3.6669 | 2.0 | 964 | 3.5534 |
| 3.5089 | 3.0 | 1446 | 3.5315 |
| 3.4295 | 4.0 | 1928 | 3.5197 |
| 3.3772 | 5.0 | 2410 | 3.5143 |
| 3.3383 | 6.0 | 2892 | 3.5110 |
| 3.3092 | 7.0 | 3374 | 3.5084 |
| 3.2857 | 8.0 | 3856 | 3.5082 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
dvitel/h0
|
dvitel
| 2022-11-19T02:02:54Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"distigpt2",
"hearthstone",
"dataset:dvitel/hearthstone",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-18T15:40:16Z |
---
license: apache-2.0
tags:
- distigpt2
- hearthstone
metrics:
- bleu
- dvitel/codebleu
- exact_match
- chrf
datasets:
- dvitel/hearthstone
model-index:
- name: h0
results:
- task:
type: text-generation
name: Python Code Synthesis
dataset:
type: dvitel/hearthstone
name: HearthStone
split: test
metrics:
- type: exact_match
value: 0.19696969696969696
name: Exact Match
- type: bleu
value: 0.8881228393983
name: BLEU
- type: dvitel/codebleu
value: 0.6764180663401291
name: CodeBLEU
- type: chrf
value: 90.6099642899634
name: chrF
---
# h0
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on [hearthstone](https://huggingface.co/datasets/dvitel/hearthstone) dataset.
[GitHub repo](https://github.com/dvitel/nlp-sem-parsing/blob/master/h0.py).
It achieves the following results on the evaluation set:
- Loss: 0.3117
- Exact Match: 0.1970
- Bleu: 0.9085
- Codebleu: 0.7341
- Ngram Match Score: 0.7211
- Weighted Ngram Match Score: 0.7299
- Syntax Match Score: 0.7536
- Dataflow Match Score: 0.7317
- Chrf: 92.8689
## Model description
DistilGPT2 fine-tuned on the HearthStone dataset for 200 epochs.
## Intended uses & limitations
HearthStone card code synthesis.
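A minimal generation sketch with 🤗 Transformers; the prompt below is a placeholder and has to be replaced with a card description linearized the way the [hearthstone](https://huggingface.co/datasets/dvitel/hearthstone) dataset does it:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dvitel/h0")
model = AutoModelForCausalLM.from_pretrained("dvitel/h0")

# Placeholder input; use the linearized card-description format from the dvitel/hearthstone dataset
prompt = "<linearized HearthStone card description>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```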
## Training and evaluation data
See the splits of the [hearthstone](https://huggingface.co/datasets/dvitel/hearthstone) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 17
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | Bleu | Codebleu | Ngram Match Score | Weighted Ngram Match Score | Syntax Match Score | Dataflow Match Score | Chrf |
|:-------------:|:------:|:-----:|:---------------:|:-----------:|:------:|:--------:|:-----------------:|:--------------------------:|:------------------:|:--------------------:|:-------:|
| 0.543 | 11.94 | 1600 | 0.2701 | 0.0152 | 0.8552 | 0.6144 | 0.6027 | 0.6136 | 0.6431 | 0.5982 | 89.0280 |
| 0.1459 | 23.88 | 3200 | 0.2408 | 0.0909 | 0.8841 | 0.6733 | 0.6610 | 0.6719 | 0.7210 | 0.6393 | 91.2517 |
| 0.0801 | 35.82 | 4800 | 0.2498 | 0.1515 | 0.8966 | 0.6999 | 0.6954 | 0.7054 | 0.7326 | 0.6662 | 92.1356 |
| 0.0498 | 47.76 | 6400 | 0.2569 | 0.1818 | 0.9012 | 0.7015 | 0.7022 | 0.7114 | 0.7428 | 0.6496 | 92.4668 |
| 0.0323 | 59.7 | 8000 | 0.2732 | 0.1667 | 0.9044 | 0.7241 | 0.7025 | 0.7123 | 0.7551 | 0.7266 | 92.5429 |
| 0.0214 | 71.64 | 9600 | 0.2896 | 0.1667 | 0.9034 | 0.7228 | 0.7101 | 0.7195 | 0.7670 | 0.6945 | 92.4258 |
| 0.015 | 83.58 | 11200 | 0.2870 | 0.1667 | 0.9046 | 0.7292 | 0.7137 | 0.7228 | 0.7667 | 0.7137 | 92.5979 |
| 0.0121 | 95.52 | 12800 | 0.2907 | 0.1667 | 0.9075 | 0.7287 | 0.7198 | 0.7297 | 0.7696 | 0.6958 | 92.7074 |
| 0.0093 | 107.46 | 14400 | 0.2976 | 0.1667 | 0.9073 | 0.7365 | 0.7134 | 0.7238 | 0.7732 | 0.7356 | 92.8347 |
| 0.0073 | 119.4 | 16000 | 0.3037 | 0.1818 | 0.9085 | 0.7326 | 0.7154 | 0.7241 | 0.7529 | 0.7381 | 92.8343 |
| 0.006 | 131.34 | 17600 | 0.3047 | 0.1970 | 0.9104 | 0.7410 | 0.7230 | 0.7312 | 0.7667 | 0.7433 | 92.8286 |
| 0.005 | 143.28 | 19200 | 0.3080 | 0.1970 | 0.9088 | 0.7377 | 0.7232 | 0.7316 | 0.7746 | 0.7214 | 92.8035 |
| 0.0044 | 155.22 | 20800 | 0.3071 | 0.1970 | 0.9076 | 0.7343 | 0.7196 | 0.7283 | 0.7783 | 0.7112 | 92.7742 |
| 0.004 | 167.16 | 22400 | 0.3097 | 0.1970 | 0.9082 | 0.7440 | 0.7236 | 0.7334 | 0.7601 | 0.7587 | 92.8117 |
| 0.0035 | 179.1 | 24000 | 0.3111 | 0.1970 | 0.9080 | 0.7355 | 0.7204 | 0.7295 | 0.7616 | 0.7304 | 92.7990 |
| 0.0036 | 191.04 | 25600 | 0.3117 | 0.1970 | 0.9085 | 0.7341 | 0.7211 | 0.7299 | 0.7536 | 0.7317 | 92.8689 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
dvitel/h2
|
dvitel
| 2022-11-19T02:02:50Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"distigpt2",
"hearthstone",
"dataset:dvitel/hearthstone",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-18T21:25:37Z |
---
license: apache-2.0
tags:
- distigpt2
- hearthstone
metrics:
- bleu
- dvitel/codebleu
- exact_match
- chrf
datasets:
- dvitel/hearthstone
model-index:
- name: h2
results:
- task:
type: text-generation
name: Python Code Synthesis
dataset:
type: dvitel/hearthstone
name: HearthStone
split: test
metrics:
- type: exact_match
value: 0.0
name: Exact Match
- type: bleu
value: 0.6082316056517667
name: BLEU
- type: dvitel/codebleu
value: 0.36984242128954287
name: CodeBLEU
- type: chrf
value: 68.77878158023694
name: chrF
---
# h2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on [hearthstone](https://huggingface.co/datasets/dvitel/hearthstone).
[GitHub repo](https://github.com/dvitel/nlp-sem-parsing/blob/master/h2.py).
It achieves the following results on the evaluation set:
- Loss: 2.5771
- Exact Match: 0.0
- Bleu: 0.6619
- Codebleu: 0.5374
- Ngram Match Score: 0.4051
- Weighted Ngram Match Score: 0.4298
- Syntax Match Score: 0.5605
- Dataflow Match Score: 0.7541
- Chrf: 73.9625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 17
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | Bleu | Codebleu | Ngram Match Score | Weighted Ngram Match Score | Syntax Match Score | Dataflow Match Score | Chrf |
|:-------------:|:------:|:-----:|:---------------:|:-----------:|:------:|:--------:|:-----------------:|:--------------------------:|:------------------:|:--------------------:|:-------:|
| 1.2052 | 11.94 | 1600 | 1.2887 | 0.0 | 0.6340 | 0.4427 | 0.3384 | 0.3614 | 0.5263 | 0.5446 | 70.8004 |
| 0.3227 | 23.88 | 3200 | 1.4484 | 0.0 | 0.6575 | 0.5050 | 0.3767 | 0.3995 | 0.5955 | 0.6485 | 72.9553 |
| 0.205 | 35.82 | 4800 | 1.6392 | 0.0 | 0.6598 | 0.5174 | 0.3788 | 0.4022 | 0.5821 | 0.7063 | 73.2766 |
| 0.1392 | 47.76 | 6400 | 1.8219 | 0.0 | 0.6584 | 0.5279 | 0.3922 | 0.4159 | 0.5742 | 0.7294 | 73.5022 |
| 0.0979 | 59.7 | 8000 | 1.9416 | 0.0 | 0.6635 | 0.5305 | 0.4012 | 0.4248 | 0.5699 | 0.7261 | 73.8081 |
| 0.0694 | 71.64 | 9600 | 2.1793 | 0.0 | 0.6593 | 0.5400 | 0.4027 | 0.4271 | 0.5562 | 0.7739 | 73.6746 |
| 0.0512 | 83.58 | 11200 | 2.2547 | 0.0 | 0.6585 | 0.5433 | 0.4040 | 0.4283 | 0.5486 | 0.7921 | 73.7670 |
| 0.0399 | 95.52 | 12800 | 2.3037 | 0.0 | 0.6585 | 0.5354 | 0.4040 | 0.4282 | 0.5454 | 0.7640 | 73.7431 |
| 0.0316 | 107.46 | 14400 | 2.4113 | 0.0 | 0.6577 | 0.5294 | 0.4006 | 0.4257 | 0.5504 | 0.7409 | 73.7004 |
| 0.0254 | 119.4 | 16000 | 2.4407 | 0.0 | 0.6607 | 0.5412 | 0.4041 | 0.4285 | 0.5598 | 0.7723 | 73.8828 |
| 0.0208 | 131.34 | 17600 | 2.4993 | 0.0 | 0.6637 | 0.5330 | 0.4042 | 0.4286 | 0.5684 | 0.7310 | 74.1760 |
| 0.0176 | 143.28 | 19200 | 2.5138 | 0.0 | 0.6627 | 0.5434 | 0.4050 | 0.4295 | 0.5620 | 0.7772 | 74.0546 |
| 0.0158 | 155.22 | 20800 | 2.5589 | 0.0 | 0.6616 | 0.5347 | 0.4044 | 0.4291 | 0.5512 | 0.7541 | 73.9516 |
| 0.0147 | 167.16 | 22400 | 2.5554 | 0.0 | 0.6620 | 0.5354 | 0.4049 | 0.4295 | 0.5630 | 0.7442 | 73.9461 |
| 0.0134 | 179.1 | 24000 | 2.5696 | 0.0 | 0.6607 | 0.5395 | 0.4046 | 0.4293 | 0.5602 | 0.7640 | 73.8383 |
| 0.0135 | 191.04 | 25600 | 2.5771 | 0.0 | 0.6619 | 0.5374 | 0.4051 | 0.4298 | 0.5605 | 0.7541 | 73.9625 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
andrewzhang505/doom_deathmatch_bots
|
andrewzhang505
| 2022-11-19T00:58:04Z | 4 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-27T23:12:48Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- metrics:
- type: mean_reward
value: 69.40 +/- 4.29
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_deathmatch_bots
type: doom_deathmatch_bots
---
An **APPO** model trained on the **doom_deathmatch_bots** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
shi-labs/nat-tiny-in1k-224
|
shi-labs
| 2022-11-18T23:12:12Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"nat",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2204.07143",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-18T22:07:29Z |
---
license: mit
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# NAT (tiny variant)
NAT-Tiny trained on ImageNet-1K at 224x224 resolution.
It was introduced in the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Hassani et al. and first released in [this repository](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer).
## Model description
NAT is a hierarchical vision transformer based on Neighborhood Attention (NA).
Neighborhood Attention is a restricted self attention pattern in which each token's receptive field is limited to its nearest neighboring pixels.
NA is a sliding-window attention pattern, and as a result is highly flexible and maintains translational equivariance.
NA comes with a PyTorch implementation through its extension, [NATTEN](https://github.com/SHI-Labs/NATTEN/).

[Source](https://paperswithcode.com/paper/neighborhood-attention-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=nat) to look for
fine-tuned versions on a task that interests you.
### Example
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, NatForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoImageProcessor.from_pretrained("shi-labs/nat-tiny-in1k-224")
model = NatForImageClassification.from_pretrained("shi-labs/nat-tiny-in1k-224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more examples, please refer to the [documentation](https://huggingface.co/transformers/model_doc/nat.html#).
### Requirements
Other than transformers, this model requires the [NATTEN](https://shi-labs.com/natten) package.
If you're on Linux, you can refer to [shi-labs.com/natten](https://shi-labs.com/natten) for instructions on installing with pre-compiled binaries (just select your torch build to get the correct wheel URL).
You can alternatively use `pip install natten` to compile on your device, which may take up to a few minutes.
Mac users only have the latter option (no pre-compiled binaries).
Refer to [NATTEN's GitHub](https://github.com/SHI-Labs/NATTEN/) for more information.
### BibTeX entry and citation info
```bibtex
@article{hassani2022neighborhood,
title = {Neighborhood Attention Transformer},
author = {Ali Hassani and Steven Walton and Jiachen Li and Shen Li and Humphrey Shi},
year = 2022,
url = {https://arxiv.org/abs/2204.07143},
eprint = {2204.07143},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}
```
|
shi-labs/dinat-tiny-in1k-224
|
shi-labs
| 2022-11-18T23:11:09Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"dinat",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2209.15001",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-18T22:07:23Z |
---
license: mit
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# DiNAT (tiny variant)
DiNAT-Tiny trained on ImageNet-1K at 224x224 resolution.
It was introduced in the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Hassani et al. and first released in [this repository](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer).
## Model description
DiNAT is a hierarchical vision transformer based on Neighborhood Attention (NA) and its dilated variant (DiNA).
Neighborhood Attention is a restricted self attention pattern in which each token's receptive field is limited to its nearest neighboring pixels.
NA and DiNA are therefore sliding-window attention patterns, and as a result are highly flexible and maintain translational equivariance.
They come with PyTorch implementations through the [NATTEN](https://github.com/SHI-Labs/NATTEN/) package.

[Source](https://paperswithcode.com/paper/dilated-neighborhood-attention-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=dinat) to look for
fine-tuned versions on a task that interests you.
### Example
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, DinatForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoImageProcessor.from_pretrained("shi-labs/dinat-tiny-in1k-224")
model = DinatForImageClassification.from_pretrained("shi-labs/dinat-tiny-in1k-224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more examples, please refer to the [documentation](https://huggingface.co/transformers/model_doc/dinat.html#).
### Requirements
Other than transformers, this model requires the [NATTEN](https://shi-labs.com/natten) package.
If you're on Linux, you can refer to [shi-labs.com/natten](https://shi-labs.com/natten) for instructions on installing with pre-compiled binaries (just select your torch build to get the correct wheel URL).
You can alternatively use `pip install natten` to compile on your device, which may take up to a few minutes.
Mac users only have the latter option (no pre-compiled binaries).
Refer to [NATTEN's GitHub](https://github.com/SHI-Labs/NATTEN/) for more information.
### BibTeX entry and citation info
```bibtex
@article{hassani2022dilated,
title = {Dilated Neighborhood Attention Transformer},
author = {Ali Hassani and Humphrey Shi},
year = 2022,
url = {https://arxiv.org/abs/2209.15001},
eprint = {2209.15001},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}
```
|
shi-labs/dinat-base-in1k-224
|
shi-labs
| 2022-11-18T23:07:43Z | 90 | 0 |
transformers
|
[
"transformers",
"pytorch",
"dinat",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2209.15001",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-18T22:04:27Z |
---
license: mit
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# DiNAT (base variant)
DiNAT-Base trained on ImageNet-1K at 224x224 resolution.
It was introduced in the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Hassani et al. and first released in [this repository](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer).
## Model description
DiNAT is a hierarchical vision transformer based on Neighborhood Attention (NA) and its dilated variant (DiNA).
Neighborhood Attention is a restricted self attention pattern in which each token's receptive field is limited to its nearest neighboring pixels.
NA and DiNA are therefore sliding-window attention patterns, and as a result are highly flexible and maintain translational equivariance.
They come with PyTorch implementations through the [NATTEN](https://github.com/SHI-Labs/NATTEN/) package.

[Source](https://paperswithcode.com/paper/dilated-neighborhood-attention-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=dinat) to look for
fine-tuned versions on a task that interests you.
### Example
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, DinatForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoImageProcessor.from_pretrained("shi-labs/dinat-base-in1k-224")
model = DinatForImageClassification.from_pretrained("shi-labs/dinat-base-in1k-224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more examples, please refer to the [documentation](https://huggingface.co/transformers/model_doc/dinat.html#).
### Requirements
Other than transformers, this model requires the [NATTEN](https://shi-labs.com/natten) package.
If you're on Linux, you can refer to [shi-labs.com/natten](https://shi-labs.com/natten) for instructions on installing with pre-compiled binaries (just select your torch build to get the correct wheel URL).
You can alternatively use `pip install natten` to compile on your device, which may take up to a few minutes.
Mac users only have the latter option (no pre-compiled binaries).
Refer to [NATTEN's GitHub](https://github.com/SHI-Labs/NATTEN/) for more information.
### BibTeX entry and citation info
```bibtex
@article{hassani2022dilated,
title = {Dilated Neighborhood Attention Transformer},
author = {Ali Hassani and Humphrey Shi},
year = 2022,
url = {https://arxiv.org/abs/2209.15001},
eprint = {2209.15001},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}
```
|
monideep2255/pseudolabeling-step1-F04
|
monideep2255
| 2022-11-18T23:04:06Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-18T18:47:27Z |
---
tags:
- generated_from_trainer
model-index:
- name: pseudolabeling-step1-F04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pseudolabeling-step1-F04
This model is a fine-tuned version of [yongjian/wav2vec2-large-a](https://huggingface.co/yongjian/wav2vec2-large-a) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5392
- Wer: 0.8870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 21.4261 | 1.71 | 500 | 3.2064 | 1.0 |
| 2.9275 | 3.42 | 1000 | 2.6461 | 1.2637 |
| 2.49 | 5.14 | 1500 | 2.0627 | 1.2527 |
| 1.8582 | 6.85 | 2000 | 1.6367 | 1.1978 |
| 1.5071 | 8.56 | 2500 | 1.2845 | 1.1743 |
| 1.2181 | 10.27 | 3000 | 1.1395 | 1.1586 |
| 1.0386 | 11.99 | 3500 | 1.0155 | 1.0926 |
| 0.9307 | 13.7 | 4000 | 0.8144 | 1.0628 |
| 0.8073 | 15.41 | 4500 | 0.7666 | 1.1146 |
| 0.7209 | 17.12 | 5000 | 0.7020 | 1.0911 |
| 0.6618 | 18.84 | 5500 | 0.6829 | 1.0612 |
| 0.6079 | 20.55 | 6000 | 0.6023 | 0.9937 |
| 0.5242 | 22.26 | 6500 | 0.6057 | 0.9827 |
| 0.4848 | 23.97 | 7000 | 0.5802 | 0.9435 |
| 0.4602 | 25.68 | 7500 | 0.5376 | 0.9027 |
| 0.446 | 27.4 | 8000 | 0.5351 | 0.8964 |
| 0.4245 | 29.11 | 8500 | 0.5392 | 0.8870 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
Jaiti/distilbert-base-uncased-finetuned-ner
|
Jaiti
| 2022-11-18T22:56:48Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-11T21:25:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
OSalem99/a2c-AntBulletEnv-v0
|
OSalem99
| 2022-11-18T22:42:18Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-18T22:41:12Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 953.99 +/- 100.86
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's "Files and versions" tab for the actual file name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename below is an assumption; check the repo's "Files and versions" tab
checkpoint = load_from_hub(repo_id="OSalem99/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
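Continuing from the snippet above, a short rollout sketch (assumes `pybullet` is installed so the Bullet environments are registered with Gym; if the policy was trained with observation normalization, a raw-environment rollout is only an approximation):
```python
import gym
import pybullet_envs  # noqa: F401 -- registers AntBulletEnv-v0 with Gym

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```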
|
elRivx/80sFashionRobot
|
elRivx
| 2022-11-18T22:18:49Z | 0 | 9 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-11-10T15:04:50Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
# 80sFashionRobot
Do you remember when the robots were fashion icons? Do you like the 80s style? This model is for you!
Some recommendations: the magic word for your prompts is 80sFashionRobot. Sometimes you would use prompts like:
request, in 80sFashionRobot style
or
an illustration of request, in 80sFashionRobot style
PS: you can replace 'request' with a person, character, etc.
If you enjoy my work, please consider supporting me:
[](https://www.buymeacoffee.com/elrivx)
Examples:
<img src=https://imgur.com/kXLw2a2.png width=30% height=30%>
<img src=https://imgur.com/Ukip4RT.png width=30% height=30%>
<img src=https://imgur.com/j6KyuIk.png width=30% height=30%>
<img src=https://imgur.com/uyabBWZ.png width=30% height=30%>
<img src=https://imgur.com/fQTcr20.png width=30% height=30%>
<img src=https://imgur.com/ZzvXZob.png width=30% height=30%>
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
elRivx/DMVC2
|
elRivx
| 2022-11-18T22:16:09Z | 0 | 3 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-11-03T15:14:43Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
# DMVC2
This is my own Stable Diffusion fine-tune, trained on 2000s videogame illustrations as a style.
If you want to test it, you can put this word in the prompt: DMVC2. Sometimes you need to prepend something like 'an illustration of'.
If you enjoy my work, please consider supporting me:
[](https://www.buymeacoffee.com/elrivx)
Examples:
<img src=https://imgur.com/lrD4Q5s.png width=30% height=30%>
<img src=https://imgur.com/DSW8Ein.png width=30% height=30%>
<img src=https://imgur.com/Z4T2eYj.png width=30% height=30%>
<img src=https://imgur.com/EzidtGk.png width=30% height=30%>
<img src=https://imgur.com/1NHdWhc.png width=30% height=30%>
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
elRivx/megaPals
|
elRivx
| 2022-11-18T22:14:49Z | 0 | 7 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-11-07T18:26:15Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
# megaPals
Do you remember the vintage superhero animated series? Do you like the 70s style? This model is for you!
Some recommendations: the magic word for your prompts is megaPals. Sometimes you would use prompts like:
request, in megaPals style
or
a cartoon of request, in megaPals style
PS: you can replace 'request' with a person, character, etc.
If you enjoy my work, please consider supporting me:
[](https://www.buymeacoffee.com/elrivx)
Examples:
<img src=https://imgur.com/Oqf58NU.png width=30% height=30%>
<img src=https://imgur.com/1RZWk6N.png width=30% height=30%>
<img src=https://imgur.com/XLXVp10.png width=30% height=30%>
<img src=https://imgur.com/E7FKp6m.png width=30% height=30%>
<img src=https://imgur.com/WEhd4Hh.png width=30% height=30%>
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
racro/sentiment-browser-extension
|
racro
| 2022-11-18T21:51:15Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-16T06:57:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sentiment-browser-extension
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-browser-extension
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7068
- Accuracy: 0.8516
- F1: 0.8690
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
zthrx/painting_generator
|
zthrx
| 2022-11-18T21:46:53Z | 13 | 18 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-11-18T08:39:12Z |
---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/zthrx/painting_generator/resolve/main/painting3.jpg"
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
---
### Painting Generator
**Convert your photos and artworks into paintings.**
Use **concep** as the activation token, for example: concep, forest, trees, etc.
The model was trained on brushstrokes, so you don't need to add any artist names or style keywords to get nice results.
Best used in img2img mode and for inpainting.
Download the ckpt file from the "Files and versions" tab into the Stable Diffusion models folder of your web UI of choice.
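A minimal img2img sketch with 🤗 Diffusers; this assumes diffusers-format weights are available in this repository (otherwise load the ckpt in your web UI as described above), and the input photo, strength, and guidance values are placeholders:
```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "zthrx/painting_generator", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("photo.jpg").convert("RGB").resize((512, 512))  # your own photo
result = pipe(
    prompt="concep, forest, trees",  # "concep" is the activation token
    image=init_image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
result.save("painting.png")
```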







|
bwhite5311/NLP-sentiment-project-2000-samples
|
bwhite5311
| 2022-11-18T20:50:13Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-18T11:25:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: NLP-sentiment-project-2000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9715
- name: F1
type: f1
value: 0.9716558925907509
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-sentiment-project-2000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1038
- Accuracy: 0.9715
- F1: 0.9717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
princeton-nlp/mabel-bert-base-uncased
|
princeton-nlp
| 2022-11-18T20:47:40Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"gender-bias",
"arxiv:2210.14975",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-26T05:56:10Z |
---
tags:
- gender-bias
- bert
---
# Model Card for `mabel-bert-base-uncased`
## Model Description
This is the model for MABEL, as described in our paper, "[MABEL: Attenuating Gender Bias using Textual Entailment Data](https://arxiv.org/abs/2210.14975)". MABEL is trained from an underlying `bert-base-uncased` backbone, and demonstrates a good bias-performance tradeoff across a suite of intrinsic and extrinsic bias metrics.
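A minimal usage sketch for masked-token prediction (the example sentence is illustrative only):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="princeton-nlp/mabel-bert-base-uncased")
# Illustrative example; [MASK] is the BERT mask token
for prediction in fill_mask("The nurse said that [MASK] would be back soon."):
    print(prediction["token_str"], round(prediction["score"], 4))
```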
|
ahmadmwali/finetuning-sentiment-hausa2
|
ahmadmwali
| 2022-11-18T20:34:22Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-09T19:52:19Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-hausa2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-hausa2
This model is a fine-tuned version of [Davlan/xlm-roberta-base-finetuned-hausa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6335
- Accuracy: 0.7310
- F1: 0.7296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
famube/autotrain-documentos-oficiais-2092367351
|
famube
| 2022-11-18T20:33:18Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"pt",
"dataset:famube/autotrain-data-documentos-oficiais",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-14T15:52:11Z |
---
tags:
- autotrain
- token-classification
language:
- pt
widget:
- text: "I love AutoTrain 🤗"
datasets:
- famube/autotrain-data-documentos-oficiais
co2_eq_emissions:
emissions: 6.461431564881563
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2092367351
- CO2 Emissions (in grams): 6.4614
## Validation Metrics
- Loss: 0.059
- Accuracy: 0.986
- Precision: 0.000
- Recall: 0.000
- F1: 0.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/famube/autotrain-documentos-oficiais-2092367351
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("famube/autotrain-documentos-oficiais-2092367351", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("famube/autotrain-documentos-oficiais-2092367351", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
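Continuing from the snippet above, a minimal post-processing sketch that turns the raw outputs into per-token entity labels:
```python
# Map each token to its predicted entity label
predicted_ids = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id.item()])
```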
|
cyburn/laze_opera_panda
|
cyburn
| 2022-11-18T18:57:58Z | 0 | 0 | null |
[
"license:unknown",
"region:us"
] | null | 2022-11-18T18:29:31Z |
---
license: unknown
---
# Laze Opera Panda finetuned style Model
Produced from publicly available pictures in landscape, portrait and square format.
## Model info
The model included was trained on "multi-resolution" images.
## Using the model
* common subject prompt tokens: `<whatever> by laze opera panda`
## Example prompts
`woman near a fountain by laze opera panda`:
<img src="https://huggingface.co/cyburn/laze_opera_panda/resolve/main/1.png" alt="Picture." width="500"/>
`woman in taxi by laze opera panda`:
<img src="https://huggingface.co/cyburn/laze_opera_panda/resolve/main/2.png" alt="Picture." width="500"/>
`man portrait by laze opera panda`:
<img src="https://huggingface.co/cyburn/laze_opera_panda/resolve/main/3.png" alt="Picture." width="500"/>
|
espnet/realzza-meld-asr-hubert-transformer
|
espnet
| 2022-11-18T18:40:43Z | 0 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"spoken-language-understanding",
"en",
"dataset:meld",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-11-18T17:10:56Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
- spoken-language-understanding
language: en
datasets:
- meld
license: cc-by-4.0
---
# ESPnet2: Meld Recipe
## Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/meld/asr1/
./run.sh
```
## Environments
- date: `Thu Nov 10 09:07:40 EST 2022`
- python version: `3.8.6 (default, Dec 17 2020, 16:57:01) [GCC 10.2.0]`
- espnet version: `espnet 202207`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `a7bd6522b32ec6472c13f6a2289dcdff4a846c12`
- Commit date: `Wed Sep 14 08:34:27 2022 -0400`
## asr_train_asr_hubert_transformer_adam_specaug_meld_raw_en_bpe850
- ASR config: conf/tuning/train_asr_hubert_transformer_adam_specaug_meld.yaml
- token_type: bpe
- keep_nbest_models: 5
|dataset|Snt|Emotion Classification (%)|
|---|---|---|
|decoder_asr_asr_model_valid.acc.ave_5best/test|2608|39.22|
|decoder_asr_asr_model_valid.acc.ave_5best/valid|1104|42.64|
### ASR results
#### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decoder_asr_asr_model_valid.acc.ave_5best/test|2608|24809|55.5|28.0|16.5|8.4|52.9|96.5|
|decoder_asr_asr_model_valid.acc.ave_5best/valid|1104|10171|55.3|29.4|15.3|7.0|51.7|96.2|
#### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decoder_asr_asr_model_valid.acc.ave_5best/test|2608|120780|71.1|10.7|18.2|10.6|39.5|96.5|
|decoder_asr_asr_model_valid.acc.ave_5best/valid|1104|49323|71.3|11.1|17.6|9.4|38.1|96.2|
#### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decoder_asr_asr_model_valid.acc.ave_5best/test|2608|35287|57.6|21.8|20.5|7.8|50.2|96.5|
|decoder_asr_asr_model_valid.acc.ave_5best/valid|1104|14430|57.4|23.2|19.4|6.1|48.6|96.2|
|
cyburn/soda_stream
|
cyburn
| 2022-11-18T18:20:38Z | 0 | 0 | null |
[
"license:unknown",
"region:us"
] | null | 2022-11-18T15:47:05Z |
---
license: unknown
---
# Soda Stream finetuned style Model
Produced from publicly available pictures in landscape, portrait and square format.
## Model info
The models included were trained on "multi-resolution" images.
## Using the model
* common subject prompt tokens: `<whatever> by soda stream`
## Example prompts
`woman near a fountain by soda stream`:
<img src="https://huggingface.co/cyburn/soda_stream/resolve/main/1.png" alt="Picture." width="500"/>
`woman in taxi by soda stream`:
<img src="https://huggingface.co/cyburn/soda_stream/resolve/main/2.png" alt="Picture." width="500"/>
`woman portrait by soda stream`:
<img src="https://huggingface.co/cyburn/soda_stream/resolve/main/3.png" alt="Picture." width="500"/>
|
thomasfm/distilbert-base-uncased-finetuned-ner-nlp
|
thomasfm
| 2022-11-18T18:09:05Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-18T17:43:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner-nlp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner-nlp
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0812
- Precision: 0.8835
- Recall: 0.9039
- F1: 0.8936
- Accuracy: 0.9804
## Model description
### Essential info about tagged entities
- geo: Geographical Entity
- gpe: Geopolitical Entity
- tim: Time Indicator
### Label description
- Label 0: B-geo
- Label 1: B-gpe
- Label 2: B-tim
- Label 3: I-geo
- Label 4: I-gpe
- Label 5: I-tim
- Label 6: O
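To see how these labels surface at inference time, here is a minimal sketch (an illustration rather than an official usage guide; the example sentence is made up and the label names are read from the repo's config):
```python
from transformers import pipeline

# token-classification pipeline; aggregation_strategy groups B-/I- pieces into whole entities
ner = pipeline(
    "token-classification",
    model="thomasfm/distilbert-base-uncased-finetuned-ner-nlp",
    aggregation_strategy="simple",
)

# illustrative sentence containing a geographical entity (geo) and a time indicator (tim)
print(ner("Thousands of demonstrators marched through London on Friday."))
```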
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0384 | 1.0 | 1781 | 0.0671 | 0.8770 | 0.9038 | 0.8902 | 0.9799 |
| 0.0295 | 2.0 | 3562 | 0.0723 | 0.8844 | 0.8989 | 0.8915 | 0.9804 |
| 0.023 | 3.0 | 5343 | 0.0731 | 0.8787 | 0.9036 | 0.8910 | 0.9800 |
| 0.0186 | 4.0 | 7124 | 0.0812 | 0.8835 | 0.9039 | 0.8936 | 0.9804 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
GabCcr99/Clasificador-Ojos-XD
|
GabCcr99
| 2022-11-18T17:54:36Z | 187 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-18T17:49:49Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Clasificador-Ojos-XD
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9696969985961914
---
# Clasificador-Ojos-XD
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
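For a quick local test, a minimal sketch along these lines should work (hedged: `eyes.jpg` is a placeholder path and the class labels come from the model's config):
```python
from transformers import pipeline

# ViT-based image classifier fine-tuned with HuggingPics
classifier = pipeline("image-classification", model="GabCcr99/Clasificador-Ojos-XD")

# "eyes.jpg" is a placeholder path to a local image
print(classifier("eyes.jpg"))  # -> list of {'label': ..., 'score': ...} dicts
```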
## Example Images
|
eimiss/EimisSemiRealistic
|
eimiss
| 2022-11-18T16:10:42Z | 0 | 43 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-11-18T09:21:10Z |
---
thumbnail: https://imgur.com/DkGWTA2.png
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Diffusion model
This model was trained on detailed semi-realistic images on top of my anime model.
# Sample generations
This model is made to produce semi-realistic and realistic results with a lot of detail.
```
Positive:1girl, aura, blue_fire, electricity, energy, fire, flame, glowing, glowing_eyes, green_eyes, hitodama, horns, lightning, long_hair, magic, male_focus, solo, spirit
Negative:lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls))
Steps: 20, Sampler: DPM++ 2S a, CFG scale: 8, Seed: 2526294281, Size: 896x768
```
<img src=https://imgur.com/HHdOmIF.jpg width=75% height=75%>
```
Positive: a girl,Phoenix girl,fluffy hair,war,a hell on earth, Beautiful and detailed costume, blue glowing eyes, masterpiece, (detailed hands), (glowing), twintails, smiling, beautiful detailed white gloves, (upper_body), (realistic)
Negative: lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls))
Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 8, Seed: 2495938777/2495938779, Size: 896x768
```
<img src=https://imgur.com/bHiTlAu.png width=75% height=75%>
<img src=https://imgur.com/dGFn0uV.png width=75% height=75%>
```
Positive:1girl, blurry, bracelet, breasts, dress, earrings, fingernails, grey_eyes, jewelry, lips, lipstick, looking_at_viewer, makeup, nail_polish, necklace, petals, red_lips, short_hair, solo, white_hair
Negative:lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls))
Steps: 20, Sampler: DPM++ 2S a, CFG scale: 8, Seed: 3149099819, Size: 704x896
```
<img src=https://imgur.com/tnGOZz8.png width=75% height=75%>
Img2img results:
```
Positive:1girl, anal_hair, black_pubic_hair, blurry, blurry_background, brown_eyes, colored_pubic_hair, excessive_pubic_hair, female_pubic_hair, forehead, grass, lips, looking_at_viewer, male_pubic_hair, mismatched_pubic_hair, pov, pubic_hair, realistic, solo, stray_pubic_hair, teeth
Negative:lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls))
Steps: 35, Sampler: Euler a, CFG scale: 9, Seed: 2148680457, Size: 512x512, Denoising strength: 0.6, Mask blur: 4
```
<img src=https://imgur.com/RVl7Xxd.png width=75% height=75%>
## Disclaimer
If you get anime images instead of semi-realistic ones, try adding prompts like semi realistic, realistic or (SemiRealImg); that usually helps. This model also works
nicely with landscapes, like my previous one, although I recommend my other anime model for landscapes.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
Davlan/bloom-560m_am_sft_10000samples
|
Davlan
| 2022-11-18T15:43:47Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-11-18T14:26:55Z |
---
license: bigscience-openrail-m
---
|
Davlan/bloom-560m_am_continual-pretrain_10000samples
|
Davlan
| 2022-11-18T15:37:46Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-18T14:06:34Z |
---
license: bigscience-openrail-m
---
|
Davlan/bloom-560m_am_madx_10000samples
|
Davlan
| 2022-11-18T14:44:59Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-11-18T14:26:38Z |
---
license: bigscience-openrail-m
---
|
Lukejd83/dndGenerator
|
Lukejd83
| 2022-11-18T13:54:36Z | 0 | 3 | null |
[
"license:odc-by",
"region:us"
] | null | 2022-11-18T04:53:58Z |
---
license: odc-by
---
Basically, generate the images by saying "dnd[RACE] person". I know some aren't people, but it's what I've got to work with. ;)
Make sure there are no spaces or punctuation in the "dnd[RACE HERE]" section, e.g. "a portrait of dndYuanTi person, intricate, elegant, highly detailed, digital painting, artstation, trending, Volumetric lighting"
Here is a list of all of them (Autognome is VERY undertrained...):
* dndAarakocra
* dndAasimar
* dndAirGenasi
* dndAstralElf
* dndAutognome
* dndBugbear
* dndCentaur
* dndChangeling
* dndDeepGnome
* dndDragonborn
* dndDwarf
* dndEarthGenasi
* dndEladrin
* dndElf
* dndFairy
* dndFirbolg
* dndFireGenasi
* dndGenasi
* dndGiff
* dndGith
* dndGnome
* dndGoblin
* dndGoliath
* dndGrung
* dndHadozee
* dndHalfElf
* dndHalfling
* dndHalfOrc
* dndHarengon
* dndHobgoblin
* dndHuman
* dndKalashtar
* dndKenku
* dndKobold
* dndLeonin
* dndLizardfolk
* dndLocathah
* dndLoxodon
* dndMinotaur
* dndOrc
* dndOwlin
* dndPlasmoid
* dndRebornLineage
* dndSatyr
* dndSeaElf
* dndShadarKai
* dndShifter
* dndSimicHybrid
* dndTabaxi
* dndThriKreen
* dndTiefling
* dndTortle
* dndTriton
* dndVedalken
* dndVerdan
* dndWarforged
* dndWaterGenasi
* dndYuanTi
|
cyburn/lego_set
|
cyburn
| 2022-11-18T13:44:33Z | 0 | 2 | null |
[
"license:unknown",
"region:us"
] | null | 2022-11-17T18:33:12Z |
---
license: unknown
---
# Lego Set finetuned style Model
Produced from publicly available pictures in landscape, portrait and square format.
## Model info
The models included were trained on "multi-resolution" images of Lego sets.
## Using the model
* common subject prompt tokens: `lego set <whatever>`
## Example prompts
`mcdonald restaurant lego set`:
<img src="https://huggingface.co/cyburn/lego_set/resolve/main/1.jpg" alt="Picture." width="500"/>
`lego set crow, skull`:
<img src="https://huggingface.co/cyburn/lego_set/resolve/main/2.jpg" alt="Picture." width="500"/>
## img2img example
`lego set ottawa parliament building sharp focus`:
<img src="https://huggingface.co/cyburn/lego_set/resolve/main/3.jpg" alt="Picture." width="500"/>
|
Madiator2011/Lyoko-Diffusion-v1.1
|
Madiator2011
| 2022-11-18T13:00:15Z | 36 | 6 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-10-30T14:52:25Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: false
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. If possible do not use this model for commercial purposes, and if you do, at least give some credits :)
By clicking on "Access repository" below, you accept that your *contact information* (email address and username) can be shared with the model authors as well.
extra_gated_fields:
I have read the License and agree with its terms: checkbox
---
# Lyoko Diffusion v1-1 Model Card

This model allows users to generate images in the styles of the TV show Code Lyoko, in both 2D and CGI format.
To switch between styles, add the corresponding token to your prompt: for CGI ```CGILyoko style style```, for 2D ```2DLyoko style style```.
If you want to support my future projects, you can do so via https://ko-fi.com/madiator2011
or by using my model on runpod with my referral link https://runpod.io?ref=vfker49t
This model has been trained thanks to the support of the Runpod.io team.
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "Madiator2011/Lyoko-Diffusion-v1.1"
# load the fp16 weights and move the pipeline to the GPU
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16")
pipe = pipe.to("cuda")

# include one of the style tokens from above ("CGILyoko style style" or "2DLyoko style style") to steer the output
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
|
sukantan/wav2vec2-large-xls-r-300m-or-colab
|
sukantan
| 2022-11-18T12:58:33Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-03T11:58:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-or-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-or-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9276
- Wer: 1.1042
## Model description
More information needed
## Intended uses & limitations
More information needed
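Pending a fuller description, a minimal transcription sketch would look like this (hedged: `audio.wav` is a placeholder path to a 16 kHz mono recording, and the target language is assumed from the `-or-` suffix of the model name):
```python
from transformers import pipeline

# wav2vec2 CTC model fine-tuned on Common Voice; expects 16 kHz mono audio
asr = pipeline("automatic-speech-recognition", model="sukantan/wav2vec2-large-xls-r-300m-or-colab")

# "audio.wav" is a placeholder path to a local recording
print(asr("audio.wav")["text"])
```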
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.764 | 24.97 | 400 | 0.9276 | 1.1042 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
stephenhbarlow/biobert-base-cased-v1.2-finetuned-PET
|
stephenhbarlow
| 2022-11-18T12:22:17Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-17T16:58:59Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: biobert-base-cased-v1.2-finetuned-PET
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-PET
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1756
- Accuracy: 0.9393
- F1: 0.9244
## Model description
More information needed
## Intended uses & limitations
More information needed
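Pending a fuller description, a minimal classification sketch would look like this (hedged: the input sentence is illustrative and the label names come from the model's config):
```python
from transformers import pipeline

# BioBERT-based sequence classifier fine-tuned on a PET-related dataset
classifier = pipeline("text-classification", model="stephenhbarlow/biobert-base-cased-v1.2-finetuned-PET")

# illustrative clinical-style sentence; replace with real report text
print(classifier("FDG PET/CT shows no evidence of hypermetabolic disease."))
```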
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3531 | 1.0 | 16 | 0.1964 | 0.9252 | 0.8995 |
| 0.3187 | 2.0 | 32 | 0.1756 | 0.9393 | 0.9244 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.14.0.dev20221117
- Datasets 2.5.2
- Tokenizers 0.13.1
|
oskarandrsson/mt-lt-sv-finetuned
|
oskarandrsson
| 2022-11-18T11:36:42Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"translation",
"lt",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-11-16T08:27:36Z |
---
license: apache-2.0
language:
- lt
- sv
tags:
- generated_from_trainer
- translation
metrics:
- bleu
model-index:
- name: mt-lt-sv-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt-lt-sv-finetuned
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-lt-sv](https://huggingface.co/Helsinki-NLP/opus-mt-lt-sv) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1276
- Bleu: 43.0025
## Model description
More information needed
## Intended uses & limitations
More information needed
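Pending a fuller description, a minimal translation sketch would look like this (hedged: the Lithuanian example sentence is illustrative):
```python
from transformers import pipeline

# Marian MT model fine-tuned for Lithuanian -> Swedish translation
translator = pipeline("translation", model="oskarandrsson/mt-lt-sv-finetuned")

# illustrative Lithuanian input; the output is Swedish
print(translator("Labas rytas, kaip sekasi?")[0]["translation_text"])
```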
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 24
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.3499 | 1.0 | 4409 | 1.2304 | 40.3211 |
| 1.2442 | 2.0 | 8818 | 1.1870 | 41.4633 |
| 1.1875 | 3.0 | 13227 | 1.1652 | 41.9164 |
| 1.1386 | 4.0 | 17636 | 1.1523 | 42.3534 |
| 1.0949 | 5.0 | 22045 | 1.1423 | 42.6339 |
| 1.0739 | 6.0 | 26454 | 1.1373 | 42.7617 |
| 1.0402 | 7.0 | 30863 | 1.1324 | 42.8568 |
| 1.0369 | 8.0 | 35272 | 1.1298 | 42.9608 |
| 1.0138 | 9.0 | 39681 | 1.1281 | 42.9833 |
| 1.0192 | 10.0 | 44090 | 1.1276 | 43.0025 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
oskarandrsson/mt-uk-sv-finetuned
|
oskarandrsson
| 2022-11-18T11:36:18Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"translation",
"uk",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-11-16T13:48:10Z |
---
license: apache-2.0
language:
- uk
- sv
tags:
- generated_from_trainer
- translation
model-index:
- name: mt-uk-sv-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt-uk-sv-finetuned
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-uk-sv](https://huggingface.co/Helsinki-NLP/opus-mt-uk-sv) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4210
- eval_bleu: 40.6634
- eval_runtime: 966.5303
- eval_samples_per_second: 18.744
- eval_steps_per_second: 4.687
- epoch: 6.0
- step: 40764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 24
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
oskarandrsson/mt-ru-sv-finetuned
|
oskarandrsson
| 2022-11-18T11:35:38Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"translation",
"ru",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-11-18T09:31:15Z |
---
license: apache-2.0
language:
- ru
- sv
tags:
- generated_from_trainer
- translation
model-index:
- name: mt-ru-sv-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt-ru-sv-finetuned
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-sv](https://huggingface.co/Helsinki-NLP/opus-mt-ru-sv) on the None dataset.
It achieves the following results on the Tatoeba.rus.swe evaluation set:
- eval_loss: 0.6998
- eval_bleu: 54.4473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 24
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
oskarandrsson/mt-bs-sv-finetuned
|
oskarandrsson
| 2022-11-18T11:35:05Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"bs",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-11-16T16:57:47Z |
---
license: apache-2.0
language:
- bs
- sv
tags:
- translation
model-index:
- name: mt-bs-sv-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt-bs-sv-finetuned
This model is a fine-tuned version of [oskarandrsson/mt-hr-sv-finetuned](https://huggingface.co/oskarandrsson/mt-hr-sv-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.8217
- eval_bleu: 53.9611
- eval_runtime: 601.8995
- eval_samples_per_second: 15.971
- eval_steps_per_second: 3.994
- epoch: 4.0
- step: 14420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 24
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
zhiguoxu/bert-base-chinese-finetuned-ner
|
zhiguoxu
| 2022-11-18T11:09:59Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-07T16:37:59Z |
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-chinese-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-ner
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0063
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
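Pending a fuller description, a minimal NER sketch would look like this (hedged: the example sentence is illustrative and the actual entity labels come from the model's config):
```python
from transformers import pipeline

# Chinese BERT token-classification model; aggregation groups sub-tokens into entity spans
ner = pipeline(
    "token-classification",
    model="zhiguoxu/bert-base-chinese-finetuned-ner",
    aggregation_strategy="simple",
)

# illustrative Chinese sentence ("Xiao Ming works at a university in Beijing")
print(ner("小明在北京的一所大学工作。"))
```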
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0712 | 1.0 | 3 | 1.6814 | 0.0472 |
| 1.545 | 2.0 | 6 | 1.1195 | 0.4993 |
| 1.1234 | 3.0 | 9 | 0.7210 | 0.7259 |
| 0.6518 | 4.0 | 12 | 0.4457 | 0.8595 |
| 0.497 | 5.0 | 15 | 0.2754 | 0.9050 |
| 0.2761 | 6.0 | 18 | 0.1742 | 0.9509 |
| 0.2281 | 7.0 | 21 | 0.1053 | 0.9903 |
| 0.1189 | 8.0 | 24 | 0.0642 | 0.9976 |
| 0.1002 | 9.0 | 27 | 0.0416 | 1.0 |
| 0.053 | 10.0 | 30 | 0.0280 | 1.0 |
| 0.0525 | 11.0 | 33 | 0.0206 | 1.0 |
| 0.0412 | 12.0 | 36 | 0.0156 | 1.0 |
| 0.0284 | 13.0 | 39 | 0.0123 | 1.0 |
| 0.0191 | 14.0 | 42 | 0.0101 | 1.0 |
| 0.0227 | 15.0 | 45 | 0.0087 | 1.0 |
| 0.0167 | 16.0 | 48 | 0.0077 | 1.0 |
| 0.0161 | 17.0 | 51 | 0.0071 | 1.0 |
| 0.015 | 18.0 | 54 | 0.0066 | 1.0 |
| 0.0167 | 19.0 | 57 | 0.0064 | 1.0 |
| 0.0121 | 20.0 | 60 | 0.0063 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0+cu102
- Datasets 1.18.4
- Tokenizers 0.12.1
|