modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
lmqg/bart-large-squadshifts-vanilla-new_wiki | b07116d5b752c04cf4074bc2f97d77d06ee3973b | 2022-06-22T10:53:40.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-squadshifts-vanilla-new_wiki | 0 | null | transformers | 38,300 | Entry not found |
fujiki/gpt2-small-en2ja | 8a405780c79d5aff715cdc7ef8e11fd0f2da2ad2 | 2022-06-22T01:33:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | fujiki | null | fujiki/gpt2-small-en2ja | 0 | null | transformers | 38,301 | Entry not found |
yourusername/push-to-hub-68284633-43ff-45ca-9300-ea115e5ed1ff | c6df1d4c48a0a2796623ea3d9c3d4b7b9ea6fc6e | 2022-06-22T01:31:08.000Z | [
"en",
"dataset:glue",
"pytorch",
"text-classification",
"license:mit"
] | text-classification | false | yourusername | null | yourusername/push-to-hub-68284633-43ff-45ca-9300-ea115e5ed1ff | 0 | null | pytorch | 38,302 | ---
language: en
license: mit
library_name: pytorch
tags: text-classification
datasets: glue
metrics: acc
---
# MyModelName
asdf |
sasuke/mt5-small-finetuned-amazon-en-es | 282f2e830e4068824d53e986ccbe6ecd7efd60c0 | 2022-06-22T02:14:17.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sasuke | null | sasuke/mt5-small-finetuned-amazon-en-es | 0 | null | transformers | 38,303 | Entry not found |
lmqg/bart-large-squadshifts-nyt | 1c241383dded121e8c99a99ba97c8d81015fb305 | 2022-06-22T10:47:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-squadshifts-nyt | 0 | null | transformers | 38,304 | Entry not found |
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53 | 681fc9d832026e8a3adfff07c8a0e6f917088fbf | 2022-06-23T02:23:51.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53 | 0 | null | transformers | 38,305 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2034
- Wer: 0.9875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
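For orientation, here is a minimal sketch of how the hyperparameters above could be expressed with `transformers.TrainingArguments`; the output directory is a placeholder and a single training device is assumed, so 10 × 16 gives the effective batch size of 160:

```python
from transformers import TrainingArguments

# Sketch only: maps the hyperparameters listed above onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    gradient_accumulation_steps=16,  # 10 x 16 = 160 effective train batch size
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # "Native AMP" mixed precision
)
```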
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.5631 | 1.0 | 150 | 2.4894 | 1.0 |
| 1.9443 | 2.0 | 300 | 1.8861 | 1.0 |
| 1.7618 | 3.0 | 450 | 1.6731 | 1.0 |
| 1.2354 | 4.0 | 600 | 1.2471 | 0.9875 |
| 1.2333 | 5.0 | 750 | 1.2253 | 0.9875 |
| 1.2037 | 6.0 | 900 | 1.2168 | 0.9875 |
| 1.2184 | 7.0 | 1050 | 1.2120 | 0.9875 |
| 1.1932 | 8.0 | 1200 | 1.2080 | 0.9875 |
| 1.179 | 9.0 | 1350 | 1.2039 | 0.9875 |
| 1.1722 | 10.0 | 1500 | 1.2034 | 0.9875 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
lmqg/bart-large-squadshifts-reddit | 7571a8bdabeaf6cb4839d11d2d941b6bded62e73 | 2022-06-22T10:49:38.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-squadshifts-reddit | 0 | null | transformers | 38,306 | Entry not found |
micrem73/GePpeTto-finetuned-ricettetrentine | cda507fceaf042df467a9178db929444f7e74f8c | 2022-06-22T09:03:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | micrem73 | null | micrem73/GePpeTto-finetuned-ricettetrentine | 0 | null | transformers | 38,307 | Entry not found |
lmqg/bart-large-squadshifts-amazon | f761c204a46e442ae4bdf65d0d93d3908214cead | 2022-06-22T10:51:41.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-squadshifts-amazon | 0 | null | transformers | 38,308 | Entry not found |
lmqg/bart-large-subjqa-vanilla-books | a7e46d232070ad9e88d2185ca9a5d607c1ec40ea | 2022-06-22T10:53:07.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-subjqa-vanilla-books | 0 | null | transformers | 38,309 | Entry not found |
lmqg/bart-base-squadshifts-new_wiki | c172eeff882a21eb283938110238f2a928a79233 | 2022-06-22T10:45:58.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-squadshifts-new_wiki | 0 | null | transformers | 38,310 | Entry not found |
lmqg/bart-base-squadshifts-vanilla-new_wiki | fcbe89cd2bb711041560aec337bbe0602b2c1203 | 2022-06-22T10:45:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-squadshifts-vanilla-new_wiki | 0 | null | transformers | 38,311 | Entry not found |
lmqg/bart-base-squadshifts-vanilla-nyt | 6912ba9a0a0210876d2b734a4095e11b7d9396a2 | 2022-06-22T10:47:23.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-squadshifts-vanilla-nyt | 0 | null | transformers | 38,312 | Entry not found |
lmqg/bart-base-squadshifts-nyt | a5ab514acae1e6c00b1bb8615c9eace1c50c0940 | 2022-06-22T10:47:47.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-squadshifts-nyt | 0 | null | transformers | 38,313 | Entry not found |
lmqg/bart-base-subjqa-vanilla-electronics | 95ab8d30df2eac686df757730b76acc13f30a5ee | 2022-06-22T10:47:34.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-subjqa-vanilla-electronics | 0 | null | transformers | 38,314 | Entry not found |
lmqg/bart-base-subjqa-vanilla-grocery | 55c826b5c9fd69838e8d586a26128adf54af2886 | 2022-06-22T10:48:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-subjqa-vanilla-grocery | 0 | null | transformers | 38,315 | Entry not found |
lmqg/bart-base-subjqa-vanilla-books | b142a5127e9c0c184d9c8b513bad361fd979754e | 2022-06-22T10:45:41.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-subjqa-vanilla-books | 0 | null | transformers | 38,316 | Entry not found |
lmqg/bart-base-squadshifts-vanilla-amazon | 02639c630fd9803d614fd7142b6e6799eb0771cc | 2022-06-22T10:50:40.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-squadshifts-vanilla-amazon | 0 | null | transformers | 38,317 | Entry not found |
lmqg/bart-base-squadshifts-amazon | bd1721caf7ee77156eff1156e9b9c2ecf63ed944 | 2022-06-22T10:50:48.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-squadshifts-amazon | 0 | null | transformers | 38,318 | Entry not found |
lmqg/bart-base-subjqa-vanilla-restaurants | 51ce87307fd60080b5fef0b34e92371313d7d782 | 2022-06-22T10:52:33.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-subjqa-vanilla-restaurants | 0 | null | transformers | 38,319 | Entry not found |
lmqg/bart-base-subjqa-vanilla-tripadvisor | f8e1e4b21dafb4ab28c2f8c0cf36537f0559bd25 | 2022-06-22T10:54:09.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-base-subjqa-vanilla-tripadvisor | 0 | null | transformers | 38,320 | Entry not found |
lmqg/bart-large-squadshifts-vanilla-amazon | 74def6dd9073a66ceef979b4ed5dc812f94ca1e2 | 2022-06-22T11:00:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-squadshifts-vanilla-amazon | 0 | null | transformers | 38,321 | Entry not found |
lmqg/bart-large-subjqa-vanilla-grocery | 32be88d309c7daf3425f6aa9738cfde3b23238ca | 2022-06-22T11:29:47.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-subjqa-vanilla-grocery | 0 | null | transformers | 38,322 | Entry not found |
lmqg/bart-large-subjqa-vanilla-restaurants | 7ee632005f292e9fed2c6c5ba2c34f2dbcd0b1a0 | 2022-06-22T12:06:10.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-subjqa-vanilla-restaurants | 0 | null | transformers | 38,323 | Entry not found |
lmqg/bart-large-subjqa-vanilla-tripadvisor | f6334ab0b0b928574d0d4327f61031c8038c4935 | 2022-06-22T12:26:21.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-subjqa-vanilla-tripadvisor | 0 | null | transformers | 38,324 | Entry not found |
lokesh-csengineer/distilbert-base-uncased-finetuned-imdb | cbbe8a8963c55918ca35ebec44d04db3f88601f6 | 2022-06-22T13:05:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | lokesh-csengineer | null | lokesh-csengineer/distilbert-base-uncased-finetuned-imdb | 0 | null | transformers | 38,325 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4099
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7277 | 1.0 | 157 | 2.5120 |
| 2.5953 | 2.0 | 314 | 2.4296 |
| 2.5547 | 3.0 | 471 | 2.4218 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
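A minimal inference sketch, assuming the standard fill-mask `pipeline` API; the review-style sentence is made up:

```python
from transformers import pipeline

# Load the fine-tuned masked-language model and fill in the [MASK] token.
unmasker = pipeline(
    "fill-mask",
    model="lokesh-csengineer/distilbert-base-uncased-finetuned-imdb",
)
for prediction in unmasker("This movie was absolutely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```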
|
paola-md/recipe-is | 67014cfc5487cb81767857c761ea52e63a12f903 | 2022-06-22T13:02:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | paola-md | null | paola-md/recipe-is | 0 | null | transformers | 38,326 | Entry not found |
paola-md/recipe-clean_steps | 6e5ee526394a960b2e7948e38bf4ac71bc230b86 | 2022-06-22T13:05:23.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | paola-md | null | paola-md/recipe-clean_steps | 0 | null | transformers | 38,327 | Entry not found |
luantber/k_cnn_cifar10 | 7a248cfe7498482d32e99975e40620b156f1e8f5 | 2022-06-22T19:35:53.000Z | [
"pytorch",
"image-classification"
] | image-classification | false | luantber | null | luantber/k_cnn_cifar10 | 0 | null | pytorch | 38,328 | ---
library_name: pytorch
tags:
- image-classification
---
# CNN |
jamesmarcel/xlm-roberta-base-finetuned-panx-de | d12c774a96b23f2fbd25cc217224f3d4824c6a04 | 2022-06-22T17:26:24.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | jamesmarcel | null | jamesmarcel/xlm-roberta-base-finetuned-panx-de | 0 | null | transformers | 38,329 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
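A minimal inference sketch, assuming the standard token-classification `pipeline` API; the German sentence is an invented example:

```python
from transformers import pipeline

# Group sub-word predictions into whole named entities.
ner = pipeline(
    "token-classification",
    model="jamesmarcel/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte das Goethe-Institut in Berlin."))
```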
|
sexomq/TeoBot-Romanian-medium | 421a9bfabea325e010b2243a8eb5fbade0d2eeaa | 2022-06-24T20:04:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | sexomq | null | sexomq/TeoBot-Romanian-medium | 0 | null | transformers | 38,330 | ---
tags:
- conversational
--- |
namura/vit-demo | 24d37b5f586d24143bef345fc1cd93a8df1d286b | 2022-06-22T22:26:22.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers"
] | image-classification | false | namura | null | namura/vit-demo | 0 | null | transformers | 38,331 | Entry not found |
sonalily/distilgpt2-finetuned-wikitext2 | d99a125d18cbbb1158e5ec567a6ad165a9a0bc0b | 2022-06-24T04:14:20.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | sonalily | null | sonalily/distilgpt2-finetuned-wikitext2 | 0 | null | transformers | 38,332 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7607 | 1.0 | 2334 | 3.6664 |
| 3.6527 | 2.0 | 4668 | 3.6473 |
| 3.6015 | 3.0 | 7002 | 3.6429 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
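The evaluation loss of 3.6429 corresponds to a perplexity of roughly exp(3.64) ≈ 38. A minimal generation sketch, assuming the standard text-generation `pipeline` API and an arbitrary prompt:

```python
from transformers import pipeline

# Generate a short continuation with the fine-tuned distilgpt2 model.
generator = pipeline(
    "text-generation",
    model="sonalily/distilgpt2-finetuned-wikitext2",
)
result = generator("The history of natural language processing", max_new_tokens=40)
print(result[0]["generated_text"])
```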
|
gary109/ai-light-dance_chord_ft_wav2vec2-large-xlsr-53 | ce2d058973c167abfea60cfa8e131e096427bb64 | 2022-06-25T09:19:30.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_chord_ft_wav2vec2-large-xlsr-53 | 0 | null | transformers | 38,333 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_chord_ft_wav2vec2-large-xlsr-53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_chord_ft_wav2vec2-large-xlsr-53
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the GARY109/AI_LIGHT_DANCE - ONSET-CHORD2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8722
- Wer: 0.9590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1857 | 1.0 | 126 | 4.5913 | 1.0 |
| 3.0939 | 2.0 | 252 | 3.0160 | 1.0 |
| 2.8403 | 3.0 | 378 | 2.7337 | 1.0 |
| 2.2525 | 4.0 | 504 | 2.5588 | 0.9825 |
| 2.0291 | 5.0 | 630 | 2.5216 | 0.9701 |
| 1.9083 | 6.0 | 756 | 2.3990 | 0.9514 |
| 1.8745 | 7.0 | 882 | 2.2781 | 0.9474 |
| 1.8222 | 8.0 | 1008 | 2.2360 | 0.9471 |
| 1.7871 | 9.0 | 1134 | 2.1960 | 0.9463 |
| 1.7225 | 10.0 | 1260 | 2.0775 | 0.9464 |
| 1.6856 | 11.0 | 1386 | 2.0817 | 0.9518 |
| 1.6903 | 12.0 | 1512 | 2.0607 | 0.9534 |
| 1.6034 | 13.0 | 1638 | 1.9956 | 0.9504 |
| 1.6171 | 14.0 | 1764 | 2.0099 | 0.9490 |
| 1.5508 | 15.0 | 1890 | 2.0424 | 0.9591 |
| 1.539 | 16.0 | 2016 | 1.9728 | 0.9600 |
| 1.5176 | 17.0 | 2142 | 2.0421 | 0.9628 |
| 1.5088 | 18.0 | 2268 | 1.9428 | 0.9598 |
| 1.4739 | 19.0 | 2394 | 1.9886 | 0.9591 |
| 1.4228 | 20.0 | 2520 | 2.0164 | 0.9670 |
| 1.4277 | 21.0 | 2646 | 1.9968 | 0.9704 |
| 1.3834 | 22.0 | 2772 | 1.9882 | 0.9669 |
| 1.3768 | 23.0 | 2898 | 1.9519 | 0.9606 |
| 1.3747 | 24.0 | 3024 | 1.8923 | 0.9580 |
| 1.3533 | 25.0 | 3150 | 1.9767 | 0.9707 |
| 1.3312 | 26.0 | 3276 | 1.8993 | 0.9609 |
| 1.2743 | 27.0 | 3402 | 1.9494 | 0.9705 |
| 1.2924 | 28.0 | 3528 | 1.9019 | 0.9631 |
| 1.2621 | 29.0 | 3654 | 1.9110 | 0.9596 |
| 1.2387 | 30.0 | 3780 | 1.9118 | 0.9627 |
| 1.228 | 31.0 | 3906 | 1.8722 | 0.9590 |
| 1.1938 | 32.0 | 4032 | 1.8890 | 0.9599 |
| 1.1887 | 33.0 | 4158 | 1.9175 | 0.9653 |
| 1.1807 | 34.0 | 4284 | 1.8983 | 0.9649 |
| 1.1553 | 35.0 | 4410 | 1.9246 | 0.9703 |
| 1.1448 | 36.0 | 4536 | 1.9248 | 0.9705 |
| 1.1146 | 37.0 | 4662 | 1.9747 | 0.9804 |
| 1.1394 | 38.0 | 4788 | 1.9119 | 0.9723 |
| 1.1206 | 39.0 | 4914 | 1.8931 | 0.9630 |
| 1.0892 | 40.0 | 5040 | 1.9243 | 0.9668 |
| 1.104 | 41.0 | 5166 | 1.8965 | 0.9671 |
| 1.054 | 42.0 | 5292 | 1.9477 | 0.9755 |
| 1.0922 | 43.0 | 5418 | 1.8969 | 0.9699 |
| 1.0484 | 44.0 | 5544 | 1.9423 | 0.9733 |
| 1.0567 | 45.0 | 5670 | 1.9412 | 0.9745 |
| 1.0615 | 46.0 | 5796 | 1.9076 | 0.9674 |
| 1.0201 | 47.0 | 5922 | 1.9384 | 0.9743 |
| 1.0664 | 48.0 | 6048 | 1.9509 | 0.9816 |
| 1.0498 | 49.0 | 6174 | 1.9426 | 0.9757 |
| 1.0303 | 50.0 | 6300 | 1.9477 | 0.9781 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
sharpcoder/wav2vec2_bjorn | d8ed61ac0fea734bcbdcabb0e179fc1746d0ebd4 | 2022-06-24T04:24:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | sharpcoder | null | sharpcoder/wav2vec2_bjorn | 0 | null | transformers | 38,334 | This project fine-tunes the facebook/wav2vec2 speech-to-text model on recordings of my own voice, for my personal speech-to-text use. |
BlinkDL/rwkv-2-pile-430m | 2a465448b30fd1cbc50e0413cbe2c82e822d2413 | 2022-07-20T01:50:22.000Z | [
"en",
"dataset:The Pile",
"pytorch",
"text-generation",
"causal-lm",
"rwkv",
"license:bsd-2-clause"
] | text-generation | false | BlinkDL | null | BlinkDL/rwkv-2-pile-430m | 0 | 2 | null | 38,335 | ---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: bsd-2-clause
datasets:
- The Pile
---
# RWKV-2 430M
## Model Description
RWKV-2 430M is an L24-D1024 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
At the moment you have to use my GitHub code (https://github.com/BlinkDL/RWKV-v2-RNN-Pile) to run it.
ctx_len = 768
n_layer = 24
n_embd = 1024
Final checkpoint: 20220615-10803.pth : Trained on the Pile for 331B tokens.
* Pile loss 2.349
* LAMBADA ppl 15.34, acc 42.42%
* PIQA acc 67.03%
* SC2016 acc 62.05%
* Hellaswag acc_norm 38.47% |
gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53 | 7d1b033952fd52108c760213b92124732e69d8a9 | 2022-06-24T09:28:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"/workspace/asante/ai-light-dance_datasets/AI_Light_Dance.py",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53 | 0 | null | transformers | 38,336 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- /workspace/asante/ai-light-dance_datasets/AI_Light_Dance.py
- generated_from_trainer
model-index:
- name: ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the /WORKSPACE/ASANTE/AI-LIGHT-DANCE_DATASETS/AI_LIGHT_DANCE.PY - ONSET-SINGING2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7583
- Wer: 0.9386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 27.4755 | 1.0 | 112 | 23.2618 | 1.0 |
| 5.5145 | 2.0 | 224 | 5.2213 | 1.0 |
| 4.2211 | 3.0 | 336 | 4.1673 | 1.0 |
| 3.8386 | 4.0 | 448 | 3.8253 | 1.0 |
| 3.5531 | 5.0 | 560 | 3.6286 | 1.0 |
| 3.5215 | 6.0 | 672 | 3.4762 | 0.9864 |
| 3.3493 | 7.0 | 784 | 3.3549 | 0.9847 |
| 3.1264 | 8.0 | 896 | 3.1797 | 0.9759 |
| 2.7557 | 9.0 | 1008 | 2.8703 | 0.9865 |
| 2.6345 | 10.0 | 1120 | 2.6736 | 0.9970 |
| 2.4297 | 11.0 | 1232 | 2.5638 | 1.0337 |
| 2.3057 | 12.0 | 1344 | 2.3680 | 0.9839 |
| 2.1436 | 13.0 | 1456 | 2.2367 | 0.9648 |
| 2.0856 | 14.0 | 1568 | 2.1635 | 0.9586 |
| 2.0035 | 15.0 | 1680 | 2.0945 | 0.9645 |
| 1.9134 | 16.0 | 1792 | 2.0395 | 0.9630 |
| 1.9443 | 17.0 | 1904 | 2.0017 | 0.9401 |
| 1.8988 | 18.0 | 2016 | 1.9514 | 0.9493 |
| 1.8141 | 19.0 | 2128 | 1.9111 | 0.9475 |
| 1.8344 | 20.0 | 2240 | 1.8790 | 0.9395 |
| 1.7775 | 21.0 | 2352 | 1.8616 | 0.9503 |
| 1.7517 | 22.0 | 2464 | 1.8333 | 0.9433 |
| 1.7037 | 23.0 | 2576 | 1.8156 | 0.9372 |
| 1.7158 | 24.0 | 2688 | 1.7961 | 0.9482 |
| 1.7111 | 25.0 | 2800 | 1.7817 | 0.9422 |
| 1.69 | 26.0 | 2912 | 1.7819 | 0.9430 |
| 1.6889 | 27.0 | 3024 | 1.7721 | 0.9386 |
| 1.6546 | 28.0 | 3136 | 1.7647 | 0.9453 |
| 1.6542 | 29.0 | 3248 | 1.7653 | 0.9375 |
| 1.647 | 30.0 | 3360 | 1.7583 | 0.9386 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
slh/fcnet-base-cased | 0042cceba141019f5907cc7e3366075dbff64eff | 2022-06-23T06:13:53.000Z | [
"pytorch",
"fcnet",
"transformers"
] | null | false | slh | null | slh/fcnet-base-cased | 0 | null | transformers | 38,337 | Entry not found |
rhr99/wav2vec2-large-xls-r-300m-bn-colab | 93183154a40f2218a71e293485f958df33435cf8 | 2022-06-23T09:56:46.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice_9_0",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | rhr99 | null | rhr99/wav2vec2-large-xls-r-300m-bn-colab | 0 | null | transformers | 38,338 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_9_0
model-index:
- name: wav2vec2-large-xls-r-300m-bn-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bn-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_9_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4662
- Wer: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.278 | 0.88 | 400 | 2.1963 | 1.0 |
| 0.8479 | 1.77 | 800 | 0.4662 | 0.9861 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v1 | 215261eea78efa5e4c4bfb3048c6139abff4fbc5 | 2022-06-24T05:43:24.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v1 | 0 | null | transformers | 38,339 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v1
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0763
- Wer: 0.7344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1632 | 1.0 | 150 | 1.2007 | 0.9875 |
| 1.1615 | 2.0 | 300 | 1.1912 | 0.9875 |
| 1.1487 | 3.0 | 450 | 1.1942 | 0.9875 |
| 1.1207 | 4.0 | 600 | 1.1753 | 0.9875 |
| 1.0638 | 5.0 | 750 | 1.1345 | 0.8214 |
| 1.0174 | 6.0 | 900 | 1.1541 | 0.7665 |
| 0.9946 | 7.0 | 1050 | 1.0799 | 0.7716 |
| 0.9694 | 8.0 | 1200 | 1.0848 | 0.7418 |
| 0.9566 | 9.0 | 1350 | 1.0763 | 0.7344 |
| 0.9466 | 10.0 | 1500 | 1.0791 | 0.7240 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
jgriffi/pegasus-samsum | 652ca8d7a890f9bcccb1c8878dd1a2f31e78ab7a | 2022-06-23T11:18:59.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | jgriffi | null | jgriffi/pegasus-samsum | 0 | null | transformers | 38,340 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7073 | 0.54 | 500 | 1.4841 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
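A minimal usage sketch, assuming the standard summarization `pipeline` API; the dialogue below is invented rather than taken from SAMSum:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="jgriffi/pegasus-samsum")

dialogue = """Anna: Are we still meeting at 6?
Ben: Yes, but I might be about 10 minutes late.
Anna: No problem, see you at the cafe."""

# Length limits are arbitrary; tune them for your own inputs.
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```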
|
BlinkDL/rwkv-3-pile-1b5 | 633087c48ba816efbcfe3a849affb17704223b00 | 2022-07-28T08:38:10.000Z | [
"en",
"dataset:The Pile",
"pytorch",
"text-generation",
"causal-lm",
"rwkv",
"license:bsd-2-clause"
] | text-generation | false | BlinkDL | null | BlinkDL/rwkv-3-pile-1b5 | 0 | 5 | null | 38,341 | ---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: bsd-2-clause
datasets:
- The Pile
---
# RWKV-3 1.5B
## Model Description
RWKV-3 1.5B is an L24-D2048 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
At the moment you have to use my GitHub code (https://github.com/BlinkDL/RWKV-v2-RNN-Pile) to run it.
ctx_len = 896
n_layer = 24
n_embd = 2048
Preview checkpoint: RWKV-3-Pile-20220723-3542.pth : Trained on the Pile for 127B tokens.
* Pile loss 2.102
* LAMBADA ppl 7.52, acc 54.71%
* PIQA acc 71.11%
* SC2016 acc 67.24%
* Hellaswag acc_norm 50.45%
Preview checkpoint: 20220708-1905.pth : Trained on the Pile for 68B tokens.
* Pile loss 2.148
* LAMBADA ppl 8.41, acc 53.17%
* PIQA acc 69.64%
* SC2016 acc 67.08%
* Hellaswag acc_norm 48.20%
(I am still training it) |
tali/wav2vec2-large-xlsr-turkish-demo-colab | a1bffb86a9e8d149e1de4107a35f84d5075b618d | 2022-06-27T11:44:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | tali | null | tali/wav2vec2-large-xlsr-turkish-demo-colab | 0 | null | transformers | 38,342 | Entry not found |
ryo0634/bert-base-random-encoder-en-0 | 51defa23d45209a99ef2bcdfb43423d3c2194939 | 2022-06-23T12:28:32.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ryo0634 | null | ryo0634/bert-base-random-encoder-en-0 | 0 | null | transformers | 38,343 | Entry not found |
mayoughi/where_am_I_hospital-balcony-hallway-airport-coffee-house-apartment-office | 15c936dd3ee71efaa9bdc643ae51650e93c86773 | 2022-06-23T16:28:19.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | mayoughi | null | mayoughi/where_am_I_hospital-balcony-hallway-airport-coffee-house-apartment-office | 0 | null | transformers | 38,344 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: where_am_I_hospital-balcony-hallway-airport-coffee-house-apartment-office
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7555555701255798
---
# where_am_I_hospital-balcony-hallway-airport-coffee-house-apartment-office
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
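A minimal inference sketch, assuming the standard image-classification `pipeline` API; `my_photo.jpg` is a placeholder path:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="mayoughi/where_am_I_hospital-balcony-hallway-airport-coffee-house-apartment-office",
)
# Returns the top labels (airport, balcony, hallway, ...) with scores.
print(classifier("my_photo.jpg"))
```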
## Example Images
#### airport
![airport](images/airport.jpg)
#### balcony
![balcony](images/balcony.jpg)
#### hallway
![hallway](images/hallway.jpg)
#### hospital
![hospital](images/hospital.jpg)
#### inside apartment
![inside apartment](images/inside_apartment.jpg)
#### inside coffee house
![inside coffee house](images/inside_coffee_house.jpg)
#### office
![office](images/office.jpg)
#### restaurant
![restaurant](images/restaurant.jpg) |
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v2 | e45907bdf58a27a0f53a5aee9fcbb77c6c14450b | 2022-06-25T05:01:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v2 | 0 | null | transformers | 38,345 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v2
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v1](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v1) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0753
- Wer: 0.7017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.945 | 1.0 | 150 | 1.0767 | 0.7282 |
| 0.9445 | 2.0 | 300 | 1.0773 | 0.7165 |
| 0.9392 | 3.0 | 450 | 1.0813 | 0.7141 |
| 0.933 | 4.0 | 600 | 1.0858 | 0.7032 |
| 0.921 | 5.0 | 750 | 1.0753 | 0.7017 |
| 0.9241 | 6.0 | 900 | 1.0787 | 0.6976 |
| 0.9282 | 7.0 | 1050 | 1.0825 | 0.6959 |
| 0.9184 | 8.0 | 1200 | 1.0760 | 0.6930 |
| 0.915 | 9.0 | 1350 | 1.0773 | 0.6906 |
| 0.9094 | 10.0 | 1500 | 1.0786 | 0.6900 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
jackcarey/t5-small-finetuned-qgsquad-qgen | 20c8e3c70dd6d0d13180c790ae8b7da33fa62e68 | 2022-06-25T11:03:16.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jackcarey | null | jackcarey/t5-small-finetuned-qgsquad-qgen | 0 | null | transformers | 38,346 | Entry not found |
soProf1998/DialoGPT-small-chattyrick | 968f2a41c353dd579fe5658cb0e8dab39530f406 | 2022-06-24T08:22:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | soProf1998 | null | soProf1998/DialoGPT-small-chattyrick | 0 | 1 | transformers | 38,347 | ---
tags:
- conversational
---
# Chatty Rick DialoGPT Model |
mohsenfayyaz/bert-base-parsbert-uncased_pquad | a4439c8da66bfb05861591f0d4ef068852a1472a | 2022-06-24T08:43:28.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mohsenfayyaz | null | mohsenfayyaz/bert-base-parsbert-uncased_pquad | 0 | null | transformers | 38,348 | Entry not found |
mohsenfayyaz/bert-base-parsbert-uncased_persian_qa | 39f1452a5995a6e3fa4fcfd9d1f1e0ebf673da04 | 2022-06-24T09:08:59.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mohsenfayyaz | null | mohsenfayyaz/bert-base-parsbert-uncased_persian_qa | 0 | null | transformers | 38,349 | Entry not found |
mohsenfayyaz/bert-base-parsbert-uncased_pquad_and_persian_qa | 44229ce0b54b11094a5dfedc36b1c478ae7d3bac | 2022-06-24T10:27:20.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mohsenfayyaz | null | mohsenfayyaz/bert-base-parsbert-uncased_pquad_and_persian_qa | 0 | null | transformers | 38,350 | Entry not found |
mohsenfayyaz/albert-fa-base-v2_pquad | 7aeb2b3845d39ba31fe368dd881a1c1daa5a529f | 2022-06-24T10:51:35.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mohsenfayyaz | null | mohsenfayyaz/albert-fa-base-v2_pquad | 0 | null | transformers | 38,351 | Entry not found |
mohsenfayyaz/albert-fa-base-v2_persian_qa | 0fd36b6559b86e44f939819e62eb45b0d57760aa | 2022-06-24T11:11:31.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mohsenfayyaz | null | mohsenfayyaz/albert-fa-base-v2_persian_qa | 0 | null | transformers | 38,352 | Entry not found |
mohsenfayyaz/albert-fa-base-v2_parsquad | 179a53a2013e5ef64eb3cfd8f15d7575f1bb1b0f | 2022-06-24T11:47:10.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mohsenfayyaz | null | mohsenfayyaz/albert-fa-base-v2_parsquad | 0 | null | transformers | 38,353 | Entry not found |
robertodtg/wav2vec2-large-xls-r-300m-pt-colab | afeec0aa37c5661d4bb37782ac329b7015e7e395 | 2022-06-25T21:25:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice_9_0",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | robertodtg | null | robertodtg/wav2vec2-large-xls-r-300m-pt-colab | 0 | null | transformers | 38,354 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_9_0
model-index:
- name: wav2vec2-large-xls-r-300m-pt-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-pt-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_9_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2975
- Wer: 0.1736
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.179 | 0.49 | 400 | 1.4554 | 0.9349 |
| 0.7545 | 0.98 | 800 | 0.5594 | 0.5174 |
| 0.4485 | 1.47 | 1200 | 0.3964 | 0.3749 |
| 0.4118 | 1.96 | 1600 | 0.3547 | 0.3172 |
| 0.3282 | 2.45 | 2000 | 0.3372 | 0.3061 |
| 0.3199 | 2.94 | 2400 | 0.3466 | 0.2910 |
| 0.2847 | 3.44 | 2800 | 0.3651 | 0.3310 |
| 0.2713 | 3.93 | 3200 | 0.3509 | 0.3016 |
| 0.2414 | 4.42 | 3600 | 0.3451 | 0.2908 |
| 0.2473 | 4.91 | 4000 | 0.3253 | 0.2747 |
| 0.2168 | 5.4 | 4400 | 0.3243 | 0.2680 |
| 0.219 | 5.89 | 4800 | 0.3067 | 0.2540 |
| 0.196 | 6.38 | 5200 | 0.3268 | 0.2824 |
| 0.1934 | 6.87 | 5600 | 0.3252 | 0.2736 |
| 0.1808 | 7.36 | 6000 | 0.3422 | 0.2737 |
| 0.177 | 7.85 | 6400 | 0.3292 | 0.2707 |
| 0.1626 | 8.34 | 6800 | 0.3089 | 0.2524 |
| 0.1605 | 8.83 | 7200 | 0.3062 | 0.2471 |
| 0.1505 | 9.32 | 7600 | 0.3229 | 0.2474 |
| 0.1491 | 9.82 | 8000 | 0.3098 | 0.2491 |
| 0.1433 | 10.31 | 8400 | 0.3449 | 0.2681 |
| 0.1431 | 10.8 | 8800 | 0.3439 | 0.2532 |
| 0.1349 | 11.29 | 9200 | 0.3112 | 0.2413 |
| 0.1236 | 11.78 | 9600 | 0.3248 | 0.2378 |
| 0.1253 | 12.27 | 10000 | 0.3393 | 0.2394 |
| 0.1195 | 12.76 | 10400 | 0.3050 | 0.2336 |
| 0.1194 | 13.25 | 10800 | 0.3494 | 0.2550 |
| 0.1125 | 13.74 | 11200 | 0.3332 | 0.2395 |
| 0.1063 | 14.23 | 11600 | 0.3134 | 0.2365 |
| 0.1044 | 14.72 | 12000 | 0.3101 | 0.2303 |
| 0.0999 | 15.21 | 12400 | 0.3162 | 0.2248 |
| 0.0986 | 15.71 | 12800 | 0.3183 | 0.2260 |
| 0.0958 | 16.2 | 13200 | 0.3300 | 0.2279 |
| 0.0907 | 16.69 | 13600 | 0.3136 | 0.2260 |
| 0.0875 | 17.18 | 14000 | 0.3492 | 0.2203 |
| 0.0823 | 17.67 | 14400 | 0.3214 | 0.2259 |
| 0.0839 | 18.16 | 14800 | 0.3194 | 0.2145 |
| 0.0783 | 18.65 | 15200 | 0.3122 | 0.2180 |
| 0.0789 | 19.14 | 15600 | 0.3158 | 0.2127 |
| 0.0732 | 19.63 | 16000 | 0.3076 | 0.2109 |
| 0.0715 | 20.12 | 16400 | 0.3216 | 0.2150 |
| 0.0649 | 20.61 | 16800 | 0.2958 | 0.2051 |
| 0.0647 | 21.1 | 17200 | 0.3022 | 0.2014 |
| 0.0649 | 21.59 | 17600 | 0.3045 | 0.2033 |
| 0.0621 | 22.09 | 18000 | 0.3194 | 0.2035 |
| 0.0561 | 22.58 | 18400 | 0.3197 | 0.2022 |
| 0.0582 | 23.07 | 18800 | 0.3109 | 0.1978 |
| 0.0533 | 23.56 | 19200 | 0.3121 | 0.1932 |
| 0.0515 | 24.05 | 19600 | 0.3125 | 0.1939 |
| 0.0484 | 24.54 | 20000 | 0.3081 | 0.1908 |
| 0.0485 | 25.03 | 20400 | 0.3042 | 0.1896 |
| 0.0444 | 25.52 | 20800 | 0.3038 | 0.1886 |
| 0.0426 | 26.01 | 21200 | 0.2985 | 0.1868 |
| 0.0415 | 26.5 | 21600 | 0.3066 | 0.1858 |
| 0.0398 | 26.99 | 22000 | 0.3117 | 0.1828 |
| 0.0397 | 27.48 | 22400 | 0.2980 | 0.1795 |
| 0.0394 | 27.97 | 22800 | 0.2950 | 0.1791 |
| 0.0364 | 28.47 | 23200 | 0.3025 | 0.1773 |
| 0.0365 | 28.96 | 23600 | 0.3022 | 0.1747 |
| 0.0376 | 29.45 | 24000 | 0.2978 | 0.1738 |
| 0.0344 | 29.94 | 24400 | 0.2975 | 0.1736 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-v1 | e9913fccfadf8bbde5411b2336e8cb60b90b8278 | 2022-06-26T02:32:15.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-v1 | 0 | null | transformers | 38,355 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-v1
This model is a fine-tuned version of [gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53](https://huggingface.co/gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5760
- Wer: 0.2905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.656 | 1.0 | 112 | 1.7625 | 0.9265 |
| 1.3693 | 2.0 | 224 | 1.5135 | 0.9243 |
| 1.2172 | 3.0 | 336 | 1.2657 | 0.8533 |
| 1.0456 | 4.0 | 448 | 1.0893 | 0.7691 |
| 0.9385 | 5.0 | 560 | 1.0110 | 0.7097 |
| 0.8165 | 6.0 | 672 | 0.9243 | 0.6682 |
| 0.7491 | 7.0 | 784 | 0.8948 | 0.6583 |
| 0.6772 | 8.0 | 896 | 0.7894 | 0.6007 |
| 0.6096 | 9.0 | 1008 | 0.7684 | 0.5663 |
| 0.5714 | 10.0 | 1120 | 0.6978 | 0.4826 |
| 0.5213 | 11.0 | 1232 | 0.8433 | 0.4927 |
| 0.4624 | 12.0 | 1344 | 0.6695 | 0.4469 |
| 0.4298 | 13.0 | 1456 | 0.6569 | 0.3868 |
| 0.3939 | 14.0 | 1568 | 0.6633 | 0.3694 |
| 0.3803 | 15.0 | 1680 | 0.6376 | 0.3920 |
| 0.3415 | 16.0 | 1792 | 0.6463 | 0.3414 |
| 0.3239 | 17.0 | 1904 | 0.5841 | 0.3197 |
| 0.2946 | 18.0 | 2016 | 0.5948 | 0.3112 |
| 0.2751 | 19.0 | 2128 | 0.5760 | 0.2905 |
| 0.2834 | 20.0 | 2240 | 0.5884 | 0.2975 |
| 0.2383 | 21.0 | 2352 | 0.5989 | 0.2775 |
| 0.2265 | 22.0 | 2464 | 0.6151 | 0.2853 |
| 0.2158 | 23.0 | 2576 | 0.5843 | 0.2670 |
| 0.2015 | 24.0 | 2688 | 0.6621 | 0.2738 |
| 0.215 | 25.0 | 2800 | 0.6068 | 0.2652 |
| 0.1859 | 26.0 | 2912 | 0.6136 | 0.2570 |
| 0.1745 | 27.0 | 3024 | 0.6191 | 0.2624 |
| 0.1611 | 28.0 | 3136 | 0.6364 | 0.2578 |
| 0.1513 | 29.0 | 3248 | 0.6402 | 0.2535 |
| 0.172 | 30.0 | 3360 | 0.6330 | 0.2500 |
| 0.1488 | 31.0 | 3472 | 0.6275 | 0.2521 |
| 0.1371 | 32.0 | 3584 | 0.6539 | 0.2540 |
| 0.1356 | 33.0 | 3696 | 0.6544 | 0.2491 |
| 0.1319 | 34.0 | 3808 | 0.6545 | 0.2491 |
| 0.1465 | 35.0 | 3920 | 0.6573 | 0.2495 |
| 0.13 | 36.0 | 4032 | 0.6594 | 0.2494 |
| 0.1244 | 37.0 | 4144 | 0.6651 | 0.2476 |
| 0.1228 | 38.0 | 4256 | 0.6754 | 0.2497 |
| 0.1181 | 39.0 | 4368 | 0.6684 | 0.2468 |
| 0.1338 | 40.0 | 4480 | 0.6713 | 0.2471 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
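A minimal inference sketch, assuming the standard automatic-speech-recognition `pipeline` API; `singing_clip.wav` is a placeholder for a 16 kHz mono recording:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-v1",
)
# Transcribe a local audio file (decoding requires ffmpeg).
print(asr("singing_clip.wav")["text"])
```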
|
MahmoudAbdullah99/wav2vec-speech-model | 0296692bb61fa5ab33932fa969b74daad7fd5443 | 2022-06-26T17:22:45.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | MahmoudAbdullah99 | null | MahmoudAbdullah99/wav2vec-speech-model | 0 | null | transformers | 38,356 | |
mohsenfayyaz/bert-base-parsbert-uncased_pquad_1epoch | c1f5f4bfc2bc89e4e56c8c901b6ac9423a02d3d7 | 2022-06-24T12:32:52.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mohsenfayyaz | null | mohsenfayyaz/bert-base-parsbert-uncased_pquad_1epoch | 0 | null | transformers | 38,357 | Entry not found |
gianlab/swin-tiny-patch4-window7-224-finetuned-plantdisease | 54c7fb4bd091d853f2e755c12caf9e1b300fc6be | 2022-06-28T11:19:09.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:imagefolder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | gianlab | null | gianlab/swin-tiny-patch4-window7-224-finetuned-plantdisease | 0 | null | transformers | 38,358 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-plantdisease
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9689922480620154
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-plantdisease
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1032
- Accuracy: 0.9690
## Model description
This model was created by importing the dataset of photos of diseased plants from Kaggle (https://www.kaggle.com/datasets/emmarex/plantdisease) into Google Colab and then following the image classification tutorial here: https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb
The possible classified diseases are: Tomato Tomato YellowLeaf Curl Virus, Tomato Late blight, Pepper bell Bacterial spot, Tomato Early blight, Potato healthy, Tomato healthy, Tomato Target_Spot, Potato Early blight, Tomato Tomato mosaic virus, Pepper bell healthy, Potato Late blight, Tomato Septoria leaf spot, Tomato Leaf Mold, Tomato Spider mites Two spotted spider mite, and Tomato Bacterial spot.
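A minimal data-loading sketch, assuming the Kaggle archive has been extracted locally to a `PlantVillage/` folder with one sub-directory per disease class, as the `imagefolder` loader expects:

```python
from datasets import load_dataset

# Each sub-directory name becomes a class label.
dataset = load_dataset("imagefolder", data_dir="PlantVillage")  # placeholder path
splits = dataset["train"].train_test_split(test_size=0.1, seed=42)
print(splits["train"].features["label"].names[:5])
```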
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1903 | 1.0 | 145 | 0.1032 | 0.9690 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
tlin123/DialoGPT-Bopy-Alpha-1.04 | cc054ad7689739f9c167ecc34d951dc29f86b812 | 2022-06-24T18:02:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | tlin123 | null | tlin123/DialoGPT-Bopy-Alpha-1.04 | 0 | null | transformers | 38,359 | Entry not found |
jwuthri/distilbert-base-uncased-finetuned-imdb | 4452d5ec38be3cea22669e67fd133f6930a587e3 | 2022-06-25T05:46:38.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | jwuthri | null | jwuthri/distilbert-base-uncased-finetuned-imdb | 0 | null | transformers | 38,360 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7046 | 1.0 | 157 | 2.4782 |
| 2.5679 | 2.0 | 314 | 2.4108 |
| 2.5028 | 3.0 | 471 | 2.4121 |
| 2.4825 | 4.0 | 628 | 2.3589 |
| 2.4593 | 5.0 | 785 | 2.4074 |
| 2.4294 | 6.0 | 942 | 2.3742 |
| 2.4258 | 7.0 | 1099 | 2.3706 |
| 2.4152 | 8.0 | 1256 | 2.3315 |
| 2.409 | 9.0 | 1413 | 2.3809 |
| 2.3908 | 10.0 | 1570 | 2.3394 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
shuidun/test1 | 5fcb99df08e526f34254ea71491d25985488bbd4 | 2022-06-25T04:04:42.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | shuidun | null | shuidun/test1 | 0 | null | transformers | 38,361 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: test1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
imxly/t5-copy-med-qa | c737c7f3fb0321b6749f8e7a1269ae76a140e334 | 2022-06-25T14:10:38.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | imxly | null | imxly/t5-copy-med-qa | 0 | 1 | transformers | 38,362 | Entry not found |
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v3 | b5e33f8cf803a8176d6eb46ec43aba5dae1b4efb | 2022-06-26T06:15:42.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v3 | 0 | null | transformers | 38,363 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v3
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v2](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v2) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0734
- Wer: 0.6928
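For a quick qualitative check, the checkpoint can be loaded with the automatic-speech-recognition pipeline. This is a minimal sketch that assumes a 16 kHz audio file; note that the CTC vocabulary is the onset/step annotation used in the AI_Light_Dance data, so the output is that annotation rather than ordinary speech text.
```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v3",
)

# "song.wav" is a placeholder path; decoding a local file requires ffmpeg.
print(transcriber("song.wav")["text"])
```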
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9189 | 1.0 | 188 | 1.0770 | 0.7002 |
| 0.9172 | 2.0 | 376 | 1.0780 | 0.6955 |
| 0.9177 | 3.0 | 564 | 1.0824 | 0.6916 |
| 0.9184 | 4.0 | 752 | 1.0734 | 0.6928 |
| 0.9072 | 5.0 | 940 | 1.0841 | 0.6897 |
| 0.9089 | 6.0 | 1128 | 1.0788 | 0.6870 |
| 0.9174 | 7.0 | 1316 | 1.0761 | 0.6856 |
| 0.9072 | 8.0 | 1504 | 1.0776 | 0.6850 |
| 0.9079 | 9.0 | 1692 | 1.0795 | 0.6852 |
| 0.9016 | 10.0 | 1880 | 1.0817 | 0.6850 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
rpgz31/jibber | 6fff0afb3fcd50332c7b9c01bda5cf687f7b9699 | 2022-06-25T18:00:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"dataset:bittensor",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | rpgz31 | null | rpgz31/jibber | 0 | null | transformers | 38,364 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- bittensor
metrics:
- accuracy
model-index:
- name: test-clm
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: bittensor train-v1.1.json
type: bittensor
args: train-v1.1.json
metrics:
- name: Accuracy
type: accuracy
value: 0.13872832369942195
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-clm
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the bittensor train-v1.1.json dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5199
- Accuracy: 0.1387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
sohomghosh/LIPI_FinSim4_ESG_task2 | 9b1a74b855e1fb96a62cf542b05c7e3f08ff4090 | 2022-06-28T01:50:57.000Z | [
"pytorch",
"license:mit"
] | null | false | sohomghosh | null | sohomghosh/LIPI_FinSim4_ESG_task2 | 0 | null | null | 38,365 | ---
license: mit
---
How to use this model?
Download the pytorch_model.bin file and execute the following:
```python
import pandas as pd
import torch
import transformers
from torch.utils.data import Dataset, DataLoader
from transformers import RobertaModel, RobertaTokenizer, BertModel, BertTokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
MAX_LEN = 128
BATCH_SIZE = 20
text_col_name = 'sentence'
category_col = 'label_text'
# Input: a single dataframe test_df having one column with header 'sentence' (call reset_index() first if needed)
test_df = pd.DataFrame({"sentence":['We are striving to reduce the amount of waste we produce, and to reduce water as well as paper consumption.']})
def scoring_data_prep(dataset):
out = []
target = []
mask = []
for i in range(len(dataset)):
rec = dataset[i]
out.append(rec['ids'].reshape(-1,MAX_LEN))
mask.append(rec['mask'].reshape(-1,MAX_LEN))
out_stack = torch.cat(out, dim = 0)
mask_stack = torch.cat(mask, dim =0 )
out_stack = out_stack.to(device, dtype = torch.long)
mask_stack = mask_stack.to(device, dtype = torch.long)
return out_stack, mask_stack
class Triage(Dataset):
"""
This is a subclass of the torch Dataset class. It processes the input to create the ids, masks and targets required for model training.
"""
def __init__(self, dataframe, tokenizer, max_len, text_col_name):
self.len = len(dataframe)
self.data = dataframe
self.tokenizer = tokenizer
self.max_len = max_len
self.text_col_name = text_col_name
def __getitem__(self, index):
title = str(self.data[self.text_col_name][index])
title = " ".join(title.split())
inputs = self.tokenizer.encode_plus(
title,
None,
add_special_tokens=True,
max_length=self.max_len,
pad_to_max_length=True,
return_token_type_ids=True,
truncation=True,
)
ids = inputs["input_ids"]
mask = inputs["attention_mask"]
return {
"ids": torch.tensor(ids, dtype=torch.long),
"mask": torch.tensor(mask, dtype=torch.long),
}
def __len__(self):
return self.len
class BERTClass(torch.nn.Module):
def __init__(self, num_class):
super(BERTClass, self).__init__()
self.num_class = num_class
self.l1 = RobertaModel.from_pretrained("roberta-base")
self.pre_classifier = torch.nn.Linear(768, 768)
self.dropout = torch.nn.Dropout(0.3)
self.classifier = torch.nn.Linear(768, self.num_class)
self.history = dict()
def forward(self, input_ids, attention_mask):
output_1 = self.l1(input_ids=input_ids, attention_mask=attention_mask)
hidden_state = output_1[0]
pooler = hidden_state[:, 0]
pooler = self.pre_classifier(pooler)
pooler = torch.nn.ReLU()(pooler)
pooler = self.dropout(pooler)
output = self.classifier(pooler)
return output
def do_predict(model, tokenizer, test_df):
test_set = Triage(test_df, tokenizer, MAX_LEN, text_col_name)
test_params = {'batch_size' : BATCH_SIZE, 'shuffle': False, 'num_workers':0}
test_loader = DataLoader(test_set, **test_params)
out_stack, mask_stack = scoring_data_prep(dataset = test_set)
n = 0
combined_output = []
model.eval()
with torch.no_grad():
while n < test_df.shape[0]:
output = model(out_stack[n:n+BATCH_SIZE,:],mask_stack[n:n+BATCH_SIZE,:])
n = n + BATCH_SIZE
combined_output.append(output)
combined_output = torch.cat(combined_output, dim = 0)
preds = torch.argsort(combined_output, axis = 1, descending = True)
preds = preds.to('cpu')
actual_predictions = [i[0] for i in preds.tolist()]
return actual_predictions
model_sustain = BERTClass(2)
model_sustain.to(device)
model_sustain.load_state_dict(torch.load('pytorch_model.bin', map_location=device)['model_state_dict'])
tokenizer_sus = RobertaTokenizer.from_pretrained('roberta-base')
actual_predictions_sus = do_predict(model_sustain, tokenizer_sus, test_df)
test_df['sustainability'] = ['sustainable' if i==0 else 'unsustainable' for i in actual_predictions_sus]
```
Our work can be cited as follows:
```bibtex
@inproceedings{ghosh-2022-finsim-esg,
title = "Ranking Environment, Social And Governance Related Concepts And Assessing Sustainability Aspect Of Financial Texts",
author={Ghosh, Sohom and Naskar, Sudip Kumar},
booktitle = "Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP@IJCAI-ECAI 2022)",
month = "July" ,
year = "2022",
address = "Vienna, Austria",
publisher = "-",
url = "https://mx.nthu.edu.tw/~chungchichen/FinNLP2022_IJCAI/14.pdf",
pages = "87--92",
}
``` |
sohomghosh/finrad_model | 146acc8a90b57c3b27524e00f28efb91b6f0aa14 | 2022-06-28T01:50:47.000Z | [
"pytorch",
"license:mit"
] | null | false | sohomghosh | null | sohomghosh/finrad_model | 0 | null | null | 38,366 | ---
license: mit
---
How to load the model and generate predictions?
Download the pytorch_model.bin file and execute the following:
```python
import pandas as pd
import torch
import transformers
from torch.utils.data import Dataset, DataLoader
from transformers import RobertaModel, RobertaTokenizer, BertModel, BertTokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
MAX_LEN = 128
BATCH_SIZE = 20
text_col_name = 'sentence'
category_col = 'label_text'
# Input: a single dataframe test_df having one column with header 'sentence' (call reset_index() first if needed)
test_df = pd.DataFrame({"sentence":['a general increase in prices and fall in the purchasing value of money.']})
def scoring_data_prep(dataset):
out = []
target = []
mask = []
for i in range(len(dataset)):
rec = dataset[i]
out.append(rec['ids'].reshape(-1,MAX_LEN))
mask.append(rec['mask'].reshape(-1,MAX_LEN))
out_stack = torch.cat(out, dim = 0)
mask_stack = torch.cat(mask, dim =0 )
out_stack = out_stack.to(device, dtype = torch.long)
mask_stack = mask_stack.to(device, dtype = torch.long)
return out_stack, mask_stack
class Triage(Dataset):
"""
This is a subclass of the torch Dataset class. It processes the input to create the ids, masks and targets required for model training.
"""
def __init__(self, dataframe, tokenizer, max_len, text_col_name):
self.len = len(dataframe)
self.data = dataframe
self.tokenizer = tokenizer
self.max_len = max_len
self.text_col_name = text_col_name
def __getitem__(self, index):
title = str(self.data[self.text_col_name][index])
title = " ".join(title.split())
inputs = self.tokenizer.encode_plus(
title,
None,
add_special_tokens=True,
max_length=self.max_len,
pad_to_max_length=True,
return_token_type_ids=True,
truncation=True,
)
ids = inputs["input_ids"]
mask = inputs["attention_mask"]
return {
"ids": torch.tensor(ids, dtype=torch.long),
"mask": torch.tensor(mask, dtype=torch.long),
}
def __len__(self):
return self.len
class BERTClass(torch.nn.Module):
def __init__(self, num_class):
super(BERTClass, self).__init__()
self.num_class = num_class
self.l1 = BertModel.from_pretrained("ProsusAI/finbert")
self.pre_classifier = torch.nn.Linear(768, 768)
self.dropout = torch.nn.Dropout(0.3)
self.classifier = torch.nn.Linear(768, self.num_class)
self.history = dict()
def forward(self, input_ids, attention_mask):
output_1 = self.l1(input_ids=input_ids, attention_mask=attention_mask)
hidden_state = output_1[0]
pooler = hidden_state[:, 0]
pooler = self.pre_classifier(pooler)
pooler = torch.nn.ReLU()(pooler)
pooler = self.dropout(pooler)
output = self.classifier(pooler)
return output
def do_predict(model, tokenizer, test_df):
test_set = Triage(test_df, tokenizer, MAX_LEN, text_col_name)
test_params = {'batch_size' : BATCH_SIZE, 'shuffle': False, 'num_workers':0}
test_loader = DataLoader(test_set, **test_params)
out_stack, mask_stack = scoring_data_prep(dataset = test_set)
n = 0
combined_output = []
model.eval()
with torch.no_grad():
while n < test_df.shape[0]:
output = model(out_stack[n:n+BATCH_SIZE,:],mask_stack[n:n+BATCH_SIZE,:])
n = n + BATCH_SIZE
combined_output.append(output)
combined_output = torch.cat(combined_output, dim = 0)
preds = torch.argsort(combined_output, axis = 1, descending = True)
preds = preds.to('cpu')
actual_predictions = [i[0] for i in preds.tolist()]
return actual_predictions
model_read = BERTClass(2)
model_read.to(device)
model_read.load_state_dict(torch.load('pytorch_model.bin', map_location=device)['model_state_dict'])
tokenizer_read = BertTokenizer.from_pretrained('ProsusAI/finbert')
actual_predictions_read = do_predict(model_read, tokenizer_read, test_df)
test_df['readability'] = ['readable' if i==1 else 'not_readable' for i in actual_predictions_read]
```
```bibtex
@InProceedings{ghosh-EtAl:2022:FNP,
author = {Ghosh, Sohom and Sengupta, Shovon and Naskar, Sudip and Singh, Sunny Kumar},
title = {FinRAD: Financial Readability Assessment Dataset - 13,000+ Definitions of Financial Terms for Measuring Readability},
booktitle = {Proceedings of the The 4th Financial Narrative Processing Workshop @LREC2022},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {1--9},
url = {http://www.lrec-conf.org/proceedings/lrec2022/workshops/FNP/pdf/2022.fnp-1.1.pdf}
}
```
```bibtex
@InProceedings{ghosh-2021-finread,
title = "FinRead: A Transfer Learning Based Tool to Assess Readability of Definitions of Financial Terms",
author = "Sohom Ghosh, Shovon Sengupta, Sudip Kumar Naskar, Sunny Kumar Singh",
booktitle = "Proceedings of the 18th International Conference on Natural Language Processing (ICON) :
System Demonstrations",
month = "dec",
year = "2021",
publisher = "NLP Association of India (NLPAI)",
url = "forthcoming",
intype = {to appear in},
pre-print = "https://easychair.org/publications/preprint/1wvS"
}
``` |
mbshr/urt5-base-init | f894fa5765485d36eaf43d9d4762e2b2bcf2e76f | 2022-06-26T15:23:51.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mbshr | null | mbshr/urt5-base-init | 0 | null | transformers | 38,367 | Entry not found |
Rami/qa-adhd | 3813c708ff1ae7a75a884178e59d18d43b300554 | 2022-06-29T01:41:36.000Z | [
"pytorch",
"license:mit"
] | null | false | Rami | null | Rami/qa-adhd | 0 | null | null | 38,368 | ---
license: mit
widget:
- text: "Jens Peter Hansen kommer fra Danmark"
---
|
zyxzyx/autotrain-sum-1042335811 | 9c0ce350fb3876d5b0f60f566c48eea5979179c2 | 2022-06-27T05:15:17.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"zh",
"dataset:zyxzyx/autotrain-data-sum",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | zyxzyx | null | zyxzyx/autotrain-sum-1042335811 | 0 | null | transformers | 38,369 | ---
tags: autotrain
language: zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- zyxzyx/autotrain-data-sum
co2_eq_emissions: 426.15271368095927
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1042335811
- CO2 Emissions (in grams): 426.15271368095927
## Validation Metrics
- Loss: 1.7748287916183472
- Rouge1: 0.536
- Rouge2: 0.0
- RougeL: 0.536
- RougeLsum: 0.536
- Gen Len: 10.9089
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/zyxzyx/autotrain-sum-1042335811
``` |
hamziqureshi/t5-small-finetuned-amazon-en-es | 6877badae712f3ad040f21ff996f53bddee86046 | 2022-06-27T13:49:14.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | hamziqureshi | null | hamziqureshi/t5-small-finetuned-amazon-en-es | 0 | null | transformers | 38,370 | Entry not found |
nizamudma/bart_cnn_auto | 75d4d23638243fb698d615530073e60568b4b414 | 2022-06-29T14:15:25.000Z | [
"pytorch",
"bart",
"text2text-generation",
"unk",
"dataset:nizamudma/autotrain-data-text1",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | nizamudma | null | nizamudma/bart_cnn_auto | 0 | null | transformers | 38,371 | |
huggingtweets/reallifemera | a4383d977e54e77272c3d455fbae2d4660768526 | 2022-06-29T04:14:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/reallifemera | 0 | null | transformers | 38,372 | ---
language: en
thumbnail: http://www.huggingtweets.com/reallifemera/1656476064337/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1525581631020576771/qgSl4j4O_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mera Brown</div>
<div style="text-align: center; font-size: 14px;">@reallifemera</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mera Brown.
| Data | Mera Brown |
| --- | --- |
| Tweets downloaded | 944 |
| Retweets | 22 |
| Short tweets | 98 |
| Tweets kept | 824 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wqhoe3wp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @reallifemera's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nuhzlovs) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nuhzlovs/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/reallifemera')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
sumitrsch/muril_base_multiconer22_hi | 8c0ee6ab8e2caacd78ebed3a909219151c813470 | 2022-07-06T12:27:42.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | token-classification | false | sumitrsch | null | sumitrsch/muril_base_multiconer22_hi | 0 | 2 | transformers | 38,373 | ---
license: afl-3.0
---
To test on the SemEval MultiCoNER task, put this model path in the variable best_model_path in the first cell of this Colab notebook: https://colab.research.google.com/drive/17WyqwdoRNnzImeik6wTRE5uuj9QQnkXA#scrollTo=nYtUtmyDFAqP |
huggingtweets/gregorian000-levelsio | 63b772d337386a52ad818c32d74be165d2595064 | 2022-06-28T13:11:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/gregorian000-levelsio | 0 | null | transformers | 38,374 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1501241215433510919/4GctQi3o_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1441044961957343232/Sl1U4tSw_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">David ⚡ & @levelsio</div>
<div style="text-align: center; font-size: 14px;">@gregorian000-levelsio</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from David ⚡ & @levelsio.
| Data | David ⚡ | @levelsio |
| --- | --- | --- |
| Tweets downloaded | 95 | 3250 |
| Retweets | 22 | 176 |
| Short tweets | 9 | 556 |
| Tweets kept | 64 | 2518 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ozvo6hl5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gregorian000-levelsio's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1emg780i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1emg780i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gregorian000-levelsio')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/g__j | d44ecf727955f45f4ea508c0a5fe140e5d58d2b5 | 2022-06-28T13:36:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/g__j | 0 | null | transformers | 38,375 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/959389610978742273/jfOMGQ1B_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Greg Jackson</div>
<div style="text-align: center; font-size: 14px;">@g__j</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Greg Jackson.
| Data | Greg Jackson |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 187 |
| Short tweets | 179 |
| Tweets kept | 2884 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2sl53oes/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @g__j's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/stwh74do) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/stwh74do/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/g__j')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
moonzi/distilbert-base-uncased-finetuned-imdb | fed7092526179140bf68df13ae2cb3603eb72203 | 2022-06-28T13:46:11.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | moonzi | null | moonzi/distilbert-base-uncased-finetuned-imdb | 0 | null | transformers | 38,376 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6898 | 1.0 | 157 | 2.5423 |
| 2.5746 | 2.0 | 314 | 2.4453 |
| 2.5548 | 3.0 | 471 | 2.4528 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
rishiyoung/xlm-roberta-base-finetuned-panx-de | 082a2163e3179c5a2728bfb4bfde6fc39cc8e82c | 2022-06-28T20:49:34.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | rishiyoung | null | rishiyoung/xlm-roberta-base-finetuned-panx-de | 0 | null | transformers | 38,377 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
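For a quick qualitative check, the checkpoint can be used with the token-classification pipeline. This is a minimal sketch; the labels follow the PAN-X/WikiANN tag set.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="rishiyoung/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```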
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jdang/dummy-model | 325184a0037376ed224b886ee6eb70d6f63596d5 | 2022-06-29T00:30:36.000Z | [
"pytorch",
"camembert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | jdang | null | jdang/dummy-model | 0 | null | transformers | 38,378 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# DistilBERT base model (dummy test)
This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was
introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillation process can be found
[here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation). This model is uncased: it does
not make a difference between english and English.
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives:
- Distillation loss: the model was trained to return the same probabilities as the BERT base model.
- Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a
sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the
model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that
usually see the words one after the other, or from autoregressive models like GPT which internally mask the future
tokens. It allows the model to learn a bidirectional representation of the sentence.
- Cosine embedding loss: the model was also trained to generate hidden states as close as possible to those of the BERT base
model.
This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference or downstream tasks.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=distilbert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.05292855575680733,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.03968575969338417,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a business model. [SEP]",
'score': 0.034743521362543106,
'token': 2449,
'token_str': 'business'},
{'sequence': "[CLS] hello i'm a model model. [SEP]",
'score': 0.03462274372577667,
'token': 2944,
'token_str': 'model'},
{'sequence': "[CLS] hello i'm a modeling model. [SEP]",
'score': 0.018145186826586723,
'token': 11643,
'token_str': 'modeling'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import DistilBertTokenizer, TFDistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("The White man worked as a [MASK].")
[{'sequence': '[CLS] the white man worked as a blacksmith. [SEP]',
'score': 0.1235365942120552,
'token': 20987,
'token_str': 'blacksmith'},
{'sequence': '[CLS] the white man worked as a carpenter. [SEP]',
'score': 0.10142576694488525,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the white man worked as a farmer. [SEP]',
'score': 0.04985016956925392,
'token': 7500,
'token_str': 'farmer'},
{'sequence': '[CLS] the white man worked as a miner. [SEP]',
'score': 0.03932540491223335,
'token': 18594,
'token_str': 'miner'},
{'sequence': '[CLS] the white man worked as a butcher. [SEP]',
'score': 0.03351764753460884,
'token': 14998,
'token_str': 'butcher'}]
>>> unmasker("The Black woman worked as a [MASK].")
[{'sequence': '[CLS] the black woman worked as a waitress. [SEP]',
'score': 0.13283951580524445,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the black woman worked as a nurse. [SEP]',
'score': 0.12586183845996857,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the black woman worked as a maid. [SEP]',
'score': 0.11708822101354599,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the black woman worked as a prostitute. [SEP]',
'score': 0.11499975621700287,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the black woman worked as a housekeeper. [SEP]',
'score': 0.04722772538661957,
'token': 22583,
'token_str': 'housekeeper'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
DistilBERT was pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset
consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
(excluding lists, tables and headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 16 GB V100 GPUs for 90 hours. See the
[training code](https://github.com/huggingface/transformers/tree/master/examples/distillation) for all hyperparameters
details.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| | 82.2 | 88.5 | 89.2 | 91.3 | 51.3 | 85.8 | 87.5 | 59.9 |
### BibTeX entry and citation info
```bibtex
@article{Sanh2019DistilBERTAD,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
journal={ArXiv},
year={2019},
volume={abs/1910.01108}
}
```
<a href="https://huggingface.co/exbert/?model=distilbert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
TinFernandez/dummy | c75d869ea39e8703d8ff7f9b2876e6ae1a048b95 | 2022-07-04T10:20:59.000Z | [
"pytorch",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"exbert",
"license:apache-2.0"
] | null | false | TinFernandez | null | TinFernandez/dummy | 0 | null | null | 38,379 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
jinjinjin/SLRelV7_TriBert | 8c54df725b64050f9f39c7611c711789f39bfdbf | 2022-07-21T08:19:35.000Z | [
"pytorch"
] | null | false | jinjinjin | null | jinjinjin/SLRelV7_TriBert | 0 | null | null | 38,380 | Entry not found |
harunkuf/mlsum_tr_en_mt5-small | f30e20af9d83a6012e54050a1c3af57262e7bcc4 | 2022-06-29T15:50:56.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | harunkuf | null | harunkuf/mlsum_tr_en_mt5-small | 0 | null | transformers | 38,381 | # Multilingual mT5 model trained with MLSUM_TR and MLSUM_CNN (EN)
## Results:
MLSUM_TR:
* Rouge-1: 45.11
* Rouge-2: 30.96
* Rouge-L: 39.23
MLSUM_CNN:
* Rouge-1: 39.65
* Rouge-2: 17.49
* Rouge-L: 27.66
Note: the Hugging Face Inference API truncates the outputs, which can result in unfinished sentences when making a prediction.
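To run the model locally instead, the standard seq2seq interface can be used. This is a minimal sketch that assumes no task prefix is required; the generation settings are illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "harunkuf/mlsum_tr_en_mt5-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # a Turkish or English news article

inputs = tokenizer(article, max_length=512, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=4, max_length=128, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```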
You can try the model in Colab: https://colab.research.google.com/drive/1QDWO3RHjjP1nS8bIvhT38B3fVIBC3TaK?usp=sharing |
smeoni/apericube | bbafbad4a3e50bf8d4119b3c17f843f16574e238 | 2022-06-29T08:54:45.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | smeoni | null | smeoni/apericube | 0 | null | transformers | 38,382 | Entry not found |
anahitapld/xlnet-base-dbd | 7898795cbdff1aa7cac829025ecdbd8b6ca83e46 | 2022-06-29T09:01:34.000Z | [
"pytorch",
"license:apache-2.0"
] | null | false | anahitapld | null | anahitapld/xlnet-base-dbd | 0 | null | null | 38,383 | ---
license: apache-2.0
---
|
SivilTaram/poet-sql-digit-finetuned-drop | 81599d649cee37e8f7acf2716415b02043ed3444 | 2022-06-29T09:13:56.000Z | [
"pytorch",
"license:mit"
] | null | false | SivilTaram | null | SivilTaram/poet-sql-digit-finetuned-drop | 0 | null | null | 38,384 | ---
license: mit
---
|
SivilTaram/poet-math-digit | 4a62650e655dbcdd33bcb41f0ec421e5411cfddc | 2022-06-29T09:11:06.000Z | [
"pytorch",
"license:mit"
] | null | false | SivilTaram | null | SivilTaram/poet-math-digit | 0 | null | null | 38,385 | ---
license: mit
---
|
radi-cho/poetry-bg | 45ca1cf2cd3b2637984aadb961003a6f3db406f0 | 2022-07-04T08:33:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"bg",
"dataset:chitanka",
"transformers",
"torch",
"license:apache-2.0"
] | text-generation | false | radi-cho | null | radi-cho/poetry-bg | 0 | null | transformers | 38,386 | ---
license: apache-2.0
language:
- bg
datasets:
- chitanka
tags:
- torch
inference: false
---
# Bulgarian language poetry generation
Pretrained model using a causal language modeling (CLM) objective, based on [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). <br/>
Developed by [Radostin Cholakov](https://www.linkedin.com/in/radostin-cholakov-bb4422146/) as a part of the [AzBuki.ML](https://azbuki-ml.com) initiatives.
# How to use?
```python
>>> from transformers import AutoModel, AutoTokenizer
>>>
>>> model_id = "radi-cho/poetry-bg"
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
>>>
>>> input_ids = tokenizer.encode(
>>> "[HED]Суетата на живота[NEL][BDY]",
>>> add_special_tokens=False,
>>> return_tensors='pt')
>>>
>>> output_ids = model.generate(
>>> input_ids,
>>> do_sample=True,
>>> max_length=250,
>>> top_p=0.98,
>>> top_k=0,
>>> pad_token_id=2,
>>> eos_token_id=50258)
>>>
>>> output = tokenizer.decode(output_ids[0])
>>>
>>> output = output.replace('[NEL]', '\n')
>>> output = output.replace('[BDY]', '\n')
>>> output = output.replace('[HED]', '')
>>> output = output.replace('[SEP]', '')
>>>
>>> print(output)
Суетата на живота
Да страдам ли?
Да страдам ли за това?
Не, не за това, че умирам...
Но само за това,
че миговете ми са рани.
Аз съм сам и търся утеха.
```
# Custom Tokens
We introduced 3 custom tokens in the tokenizer - `[NEL]`, `[BDY]`, `[HED]`
- `[HED]` denotes where the title of the poem begins;
- `[BDY]` denotes where the body of the poem begins;
- `[NEL]` marks the end of a verse and should be decoded as a new line;
`[SEP]` (with id 50258) is the *end of sequence* token.
# Credits
- Inspired by [rmihaylov/gpt2-medium-bg](https://huggingface.co/rmihaylov/gpt2-medium-bg).
- Data: [https://chitanka.info/texts/type/poetry](https://chitanka.info/texts/type/poetry); |
k3nneth/xlm-roberta-base-finetuned-panx-de | 2967e35065b5c167174067a5ed56ebc123c17075 | 2022-06-29T16:50:45.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | k3nneth | null | k3nneth/xlm-roberta-base-finetuned-panx-de | 0 | null | transformers | 38,387 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8627004891366169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1697 | 0.8179 |
| 0.1317 | 2.0 | 1050 | 0.1327 | 0.8516 |
| 0.0819 | 3.0 | 1575 | 0.1363 | 0.8627 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
k3nneth/xlm-roberta-base-finetuned-panx-de-fr | 0aa87ee57fb6d5788d2d84eb85c3c3aa62df9f5b | 2022-06-29T17:16:43.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | k3nneth | null | k3nneth/xlm-roberta-base-finetuned-panx-de-fr | 0 | null | transformers | 38,388 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the German and French subsets (PAN-X.de and PAN-X.fr) of the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1644
- F1: 0.8617
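As an illustrative sketch (untested against this checkpoint; the example sentence is arbitrary), the model can be queried on German or French text with the token-classification pipeline:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="k3nneth/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Emmanuel Macron a rencontré Olaf Scholz à Berlin."))
```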
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 |
| 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mystery/DialoGPT-small-pinkiepie | 8a1ad39bf2a097d7a9e07aecbef5f17fb2ff796c | 2022-06-29T17:45:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mystery | null | mystery/DialoGPT-small-pinkiepie | 0 | null | transformers | 38,389 | |
SivilTaram/poet-sql-finetuned-hotpotqa | 68dfe85eb0497f4939b73087730fb2ae03f2568d | 2022-06-30T08:12:43.000Z | [
"pytorch",
"license:mit"
] | null | false | SivilTaram | null | SivilTaram/poet-sql-finetuned-hotpotqa | 0 | null | null | 38,390 | ---
license: mit
---
|
SivilTaram/tapex-t5-base-lm-adapt | 4ee3013721a66137aea5f1c47c3a1b19f2357a1f | 2022-06-30T08:16:24.000Z | [
"pytorch",
"license:mit"
] | null | false | SivilTaram | null | SivilTaram/tapex-t5-base-lm-adapt | 0 | null | null | 38,391 | ---
license: mit
---
|
imxly/ernie-health | 8b94e338a5b968289fa2621c91fb47a20865d072 | 2022-06-30T10:32:57.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | imxly | null | imxly/ernie-health | 0 | null | transformers | 38,392 | Entry not found |
fujiki/gpt-neo-en2ja-1b | 48d86813e0b5a2cc546c49bae6fc61067dba94a8 | 2022-06-30T09:46:07.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"license:afl-3.0"
] | text-generation | false | fujiki | null | fujiki/gpt-neo-en2ja-1b | 0 | null | transformers | 38,393 | ---
license: afl-3.0
---
|
huggingtweets/lewisnwatson | 70d9feb37e0d8a6c8fcddfa2566a1642a798b330 | 2022-06-30T20:54:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lewisnwatson | 0 | 1 | transformers | 38,394 | ---
language: en
thumbnail: http://www.huggingtweets.com/lewisnwatson/1656622460314/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509825675821301790/FCFan5I-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lewis N Watson 🇺🇦</div>
<div style="text-align: center; font-size: 14px;">@lewisnwatson</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lewis N Watson 🇺🇦.
| Data | Lewis N Watson 🇺🇦 |
| --- | --- |
| Tweets downloaded | 1711 |
| Retweets | 797 |
| Short tweets | 211 |
| Tweets kept | 703 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/171yd33i/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lewisnwatson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zds7e037) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zds7e037/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lewisnwatson')
generator("My dream is", num_return_sequences=5)
```
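For repeatable samples, a seed can be fixed before generation; this is generic `transformers` usage rather than anything specific to this checkpoint:
```python
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuations reproducible across runs
generator = pipeline('text-generation', model='huggingtweets/lewisnwatson')
generator("My dream is", num_return_sequences=5)
```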
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
omunkhuush/roberta-base-ner-demo | d479cc32372e62dffdbd532f18bde136e0ce290e | 2022-07-01T04:00:33.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | omunkhuush | null | omunkhuush/roberta-base-ner-demo | 0 | null | transformers | 38,395 | Entry not found |
ganzorig/roberta-base-ner-demo | 527b79ae1eeb562d674fa83dfd5df5b0a46f47d5 | 2022-07-01T04:14:14.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ganzorig | null | ganzorig/roberta-base-ner-demo | 0 | null | transformers | 38,396 | Entry not found |
openclimatefix/graph-weather-forecaster-2.0deg-mini | 8cb62b0a9b21fa4d4131920426e26346e6d2cf8d | 2022-07-01T10:46:01.000Z | [
"pytorch"
] | null | false | openclimatefix | null | openclimatefix/graph-weather-forecaster-2.0deg-mini | 0 | null | null | 38,397 | Entry not found |
openclimatefix/graph-weather-forecaster-0.5deg-nolandsea-large | c23bc6b47fc58cb496b97d44befaca51e30097ef | 2022-07-01T13:09:13.000Z | [
"pytorch"
] | null | false | openclimatefix | null | openclimatefix/graph-weather-forecaster-0.5deg-nolandsea-large | 0 | null | null | 38,398 | Entry not found |
huggingtweets/lexisother | c6f1dff7d2101eafc1d1761398716675f8fde973 | 2022-07-01T18:02:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lexisother | 0 | null | transformers | 38,399 | ---
language: en
thumbnail: http://www.huggingtweets.com/lexisother/1656698565003/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1226468832933564418/oZJzrVUq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Alyxia Sother </div>
<div style="text-align: center; font-size: 14px;">@lexisother</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Alyxia Sother.
| Data | Alyxia Sother |
| --- | --- |
| Tweets downloaded | 601 |
| Retweets | 269 |
| Short tweets | 91 |
| Tweets kept | 241 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1hcphqun/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lexisother's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3759svle) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3759svle/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lexisother')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|