| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-27 00:47:30) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 533 classes) | tags (list, 1 to 4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-27 00:47:21) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Jeska/BertjeWDialDataALL
|
Jeska
| 2021-12-03T22:10:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialDataALL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALL
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8.0
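These settings map roughly onto the following `TrainingArguments` (a sketch for illustration, not the exact training script; `output_dir` is a placeholder). The total train batch size of 64 is the per-device batch size of 16 times 4 gradient-accumulation steps.
```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="./bertje-wdialdata-all",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,        # effective train batch size: 16 * 4 = 64
    num_train_epochs=8.0,
    seed=42,
    lr_scheduler_type="linear",           # Adam(betas=(0.9, 0.999), eps=1e-8) is the default optimizer
)
```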
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1739 | 1.0 | 1542 | 2.0150 |
| 2.0759 | 2.0 | 3084 | 1.9918 |
| 2.0453 | 3.0 | 4626 | 2.0132 |
| 1.9936 | 4.0 | 6168 | 1.9341 |
| 1.9659 | 5.0 | 7710 | 1.9140 |
| 1.9545 | 6.0 | 9252 | 1.9418 |
| 1.9104 | 7.0 | 10794 | 1.9179 |
| 1.8991 | 8.0 | 12336 | 1.9157 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ffsouza/t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
|
ffsouza
| 2021-12-03T21:45:00Z | 38 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- wmt16_en_ro_pre_processed
model-index:
- name: t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
rtoguchi/t5-small-finetuned-en-to-ro-fp16_off-lr_2e-7-weight_decay_0.001
|
rtoguchi
| 2021-12-03T19:24:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-fp16_off-lr_2e-7-weight_decay_0.001
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 4.7258
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-fp16_off-lr_2e-7-weight_decay_0.001
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4943
- Bleu: 4.7258
- Gen Len: 18.7149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 1.047 | 1.0 | 7629 | 1.4943 | 4.7258 | 18.7149 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
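A minimal inference sketch, assuming the checkpoint works with the standard `translation_en_to_ro` pipeline:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for English-to-Romanian translation.
translator = pipeline(
    "translation_en_to_ro",
    model="rtoguchi/t5-small-finetuned-en-to-ro-fp16_off-lr_2e-7-weight_decay_0.001",
)
print(translator("The weather is nice today.")[0]["translation_text"])
```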
|
jenspt/byt5_ft_all_clean_data_lr_1e4
|
jenspt
| 2021-12-03T18:11:12Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=1,              # total number of training epochs
    per_device_train_batch_size=8,   # batch size per device during training
    #per_device_eval_batch_size=2,   # batch size for evaluation
    warmup_steps=3000,               # number of warmup steps for the learning rate scheduler (used to be 500)
    weight_decay=0.01,               # strength of weight decay
    learning_rate=0.1e-3,            # default = 5e-5 = 0.5e-4
    logging_dir='./logs',            # directory for storing logs
    logging_steps=50,
    #eval_steps=100,
    overwrite_output_dir=True,
    save_strategy='epoch',
    #logging_strategy='epoch',
)
```
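The card shows only the training configuration; a minimal inference sketch, assuming the checkpoint loads with the standard text2text classes (the input string is a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# ByT5 operates on raw bytes, so the tokenizer needs no vocabulary file.
tokenizer = AutoTokenizer.from_pretrained("jenspt/byt5_ft_all_clean_data_lr_1e4")
model = AutoModelForSeq2SeqLM.from_pretrained("jenspt/byt5_ft_all_clean_data_lr_1e4")

inputs = tokenizer("Example input text", return_tensors="pt")  # placeholder input
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```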
|
ffsouza/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.02-finetuned-en-to-ro
|
ffsouza
| 2021-12-03T16:07:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- wmt16_en_ro_pre_processed
metrics:
- bleu
model-index:
- name: t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.02-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16_en_ro_pre_processed
type: wmt16_en_ro_pre_processed
args: enro
metrics:
- name: Bleu
type: bleu
value: 0.0002
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.02-finetuned-en-to-ro
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4854
- Bleu: 0.0002
- Gen Len: 9.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 6.2568 | 1.0 | 76290 | 6.4854 | 0.0002 | 9.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
jenspt/byt5_ft_all_clean_data
|
jenspt
| 2021-12-03T13:32:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=1,              # total number of training epochs
    per_device_train_batch_size=8,   # batch size per device during training
    #per_device_eval_batch_size=2,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for the learning rate scheduler (used to be 500)
    weight_decay=0.01,               # strength of weight decay
    #learning_rate=0.1e-3,           # default = 5e-5 = 0.5e-4
    logging_dir='./logs',            # directory for storing logs
    logging_steps=50,
    #eval_steps=100,
    overwrite_output_dir=True,
    save_strategy='epoch',
    #logging_strategy='epoch',
)
```
|
rtoguchi/t5-small-finetuned-en-to-ro-fp16_off
|
rtoguchi
| 2021-12-03T13:18:24Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-fp16_off
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3056
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-fp16_off
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4078
- Bleu: 7.3056
- Gen Len: 18.2556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6037 | 1.0 | 7629 | 1.4078 | 7.3056 | 18.2556 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
vicgalle/clip-vit-base-patch16-photo-critique
|
vicgalle
| 2021-12-03T10:05:09Z | 20 | 1 |
transformers
|
[
"transformers",
"jax",
"clip",
"zero-shot-image-classification",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2022-03-02T23:29:05Z |
A CLIP model retrained on a subset of the DPC dataset.
### Usage instructions
```python
from transformers import AutoTokenizer, AutoModel, CLIPProcessor
tokenizer = AutoTokenizer.from_pretrained("vicgalle/clip-vit-base-patch16-photo-critique")
model = AutoModel.from_pretrained("vicgalle/clip-vit-base-patch16-photo-critique", from_flax=True)
processor = CLIPProcessor.from_pretrained("vicgalle/clip-vit-base-patch16-photo-critique")
```
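Once loaded, the processor and model can score image-text similarity in the usual CLIP way; a sketch with a placeholder image path and placeholder captions:
```python
from PIL import Image
import torch
from transformers import AutoModel, CLIPProcessor

model = AutoModel.from_pretrained("vicgalle/clip-vit-base-patch16-photo-critique", from_flax=True)
processor = CLIPProcessor.from_pretrained("vicgalle/clip-vit-base-patch16-photo-critique")

image = Image.open("photo.jpg")  # placeholder image path
texts = ["a well composed photo", "a blurry photo"]  # placeholder critique prompts

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity logits
print(logits.softmax(dim=-1))
```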
|
danhsf/t5-small-finetuned-en-to-ro-lr_2e-3-fp_false
|
danhsf
| 2021-12-03T09:19:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-lr_2e-3-fp_false
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.1921
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-lr_2e-3-fp_false
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4239
- Bleu: 7.1921
- Gen Len: 18.2611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 0.8922 | 0.05 | 2000 | 1.7000 | 6.5274 | 18.2656 |
| 0.8621 | 0.1 | 4000 | 1.6409 | 6.6411 | 18.2311 |
| 0.8433 | 0.16 | 6000 | 1.6396 | 6.6601 | 18.2596 |
| 0.8297 | 0.21 | 8000 | 1.6304 | 6.7129 | 18.2581 |
| 0.8006 | 0.26 | 10000 | 1.6022 | 6.6067 | 18.2816 |
| 0.793 | 0.31 | 12000 | 1.5999 | 6.551 | 18.2631 |
| 0.774 | 0.37 | 14000 | 1.5586 | 6.7105 | 18.2661 |
| 0.7618 | 0.42 | 16000 | 1.5769 | 6.7278 | 18.2526 |
| 0.7463 | 0.47 | 18000 | 1.5625 | 6.6972 | 18.2201 |
| 0.7394 | 0.52 | 20000 | 1.5377 | 6.936 | 18.2491 |
| 0.7203 | 0.58 | 22000 | 1.5191 | 7.0205 | 18.2731 |
| 0.7158 | 0.63 | 24000 | 1.5055 | 6.835 | 18.2506 |
| 0.688 | 0.68 | 26000 | 1.4779 | 7.0534 | 18.2716 |
| 0.678 | 0.73 | 28000 | 1.4691 | 6.9735 | 18.2616 |
| 0.6677 | 0.79 | 30000 | 1.4702 | 7.0359 | 18.2496 |
| 0.6568 | 0.84 | 32000 | 1.4534 | 6.9982 | 18.2556 |
| 0.6475 | 0.89 | 34000 | 1.4427 | 7.0443 | 18.2466 |
| 0.6395 | 0.94 | 36000 | 1.4265 | 7.1205 | 18.2721 |
| 0.6319 | 1.0 | 38000 | 1.4239 | 7.1921 | 18.2611 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
nateraw/resnet50-oxford-iiit-pet
|
nateraw
| 2021-12-03T06:59:13Z | 82 | 0 |
timm
|
[
"timm",
"pytorch",
"image-classification",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for resnet50-oxford-iiit-pet

|
eliotm/t5-small-finetuned-en-to-ro-fp16_off
|
eliotm
| 2021-12-03T03:05:19Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-fp16_off
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 5.9132
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-fp16_off
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8351
- Bleu: 5.9132
- Gen Len: 18.2656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.8501 | 1.0 | 7629 | 1.8351 | 5.9132 | 18.2656 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
OscarNav/dialoGPT_translate
|
OscarNav
| 2021-12-03T01:30:17Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
# Finetuned DialoGPT model for Eng-Spa translation
The DialoGPT-small model was fine-tuned on English-to-Spanish translation pairs extracted from http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip
Some example translations:
| Role | Response |
| :---: |------------------------|
| User | please, sing me a song |
| Bot | Por favor, canta una canción. |
| User | I really want to go to China |
| Bot | Realmente quiero ir a China. |
| User | Can you do me a favor? |
| Bot | ¿Me puedes hacer un favor? |
| User | I don't know what you are talking about |
| Bot | No sé de qué estás hablando. |
| User | I don't want to go to China |
| Bot | No quiero ir a China. |
# Using the model
Example code for trying out the model:
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-small')
model = AutoModelWithLMHead.from_pretrained('OscarNav/dialoGPT_translate')
# Let's translate 5 sentences
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        new_user_input_ids, max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
        top_p=0.92, top_k=50
    )

    # pretty print last output tokens from bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, new_user_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
alexrfelicio/t5-small-finetuned32-en-to-de
|
alexrfelicio
| 2021-12-02T22:39:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned32-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned32-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 136 | 1.4226 | 21.9554 | 17.8089 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
gayanin/bart-mlm-pubmed-medterm
|
gayanin
| 2021-12-02T20:51:43Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-mlm-pubmed-medterm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm-pubmed-medterm
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Rouge2 Precision: 0.985
- Rouge2 Recall: 0.7208
- Rouge2 Fmeasure: 0.8088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.0018 | 1.0 | 13833 | 0.0003 | 0.985 | 0.7208 | 0.8088 |
| 0.0014 | 2.0 | 27666 | 0.0006 | 0.9848 | 0.7207 | 0.8086 |
| 0.0009 | 3.0 | 41499 | 0.0002 | 0.9848 | 0.7207 | 0.8086 |
| 0.0007 | 4.0 | 55332 | 0.0002 | 0.985 | 0.7208 | 0.8088 |
| 0.0006 | 5.0 | 69165 | 0.0001 | 0.9848 | 0.7207 | 0.8087 |
| 0.0001 | 6.0 | 82998 | 0.0002 | 0.9846 | 0.7206 | 0.8086 |
| 0.0009 | 7.0 | 96831 | 0.0001 | 0.9848 | 0.7208 | 0.8087 |
| 0.0 | 8.0 | 110664 | 0.0000 | 0.9848 | 0.7207 | 0.8087 |
| 0.0001 | 9.0 | 124497 | 0.0000 | 0.985 | 0.7208 | 0.8088 |
| 0.0 | 10.0 | 138330 | 0.0000 | 0.985 | 0.7208 | 0.8088 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/angiejolielive
|
huggingtweets
| 2021-12-02T20:17:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/angiejolielive/1638476268574/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/817164380081180673/TJnt3Lxe_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Angelina Jolie</div>
<div style="text-align: center; font-size: 14px;">@angiejolielive</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Angelina Jolie.
| Data | Angelina Jolie |
| --- | --- |
| Tweets downloaded | 1118 |
| Retweets | 71 |
| Short tweets | 45 |
| Tweets kept | 1002 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fb12gam/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @angiejolielive's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2g9ynpkt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2g9ynpkt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/angiejolielive')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
fse/fasttext-wiki-news-subwords-300
|
fse
| 2021-12-02T20:13:10Z | 0 | 2 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# Fasttext
1 million word vectors trained on Wikipedia 2017, UMBC webbase corpus and statmt.org news dataset (16B tokens).
Read more:
* https://fasttext.cc/docs/en/english-vectors.html
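These vectors can also be loaded through gensim's downloader API, assuming the standard gensim-data mirror of this model (a minimal sketch):
```python
import gensim.downloader as api

# The same vectors are distributed through gensim-data under this name.
vectors = api.load("fasttext-wiki-news-subwords-300")
print(vectors.most_similar("language", topn=5))
```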
|
fse/fasttext-crawl-subwords-300
|
fse
| 2021-12-02T20:06:16Z | 0 | 0 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# Fasttext
2 million word vectors trained with subword information on Common Crawl (600B tokens).
Read more:
* https://fasttext.cc/docs/en/english-vectors.html
|
rtoguchi/t5-small-finetuned-en-to-ro-weight_decay_0.001
|
rtoguchi
| 2021-12-02T17:46:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-weight_decay_0.001
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3524
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-weight_decay_0.001
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4509
- Bleu: 7.3524
- Gen Len: 18.2581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6488 | 1.0 | 7629 | 1.4509 | 7.3524 | 18.2581 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
fse/glove-wiki-gigaword-50
|
fse
| 2021-12-02T16:45:04Z | 0 | 1 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# GloVe Wikipedia + Gigaword
Pre-trained GloVe vectors trained on Wikipedia 2014 and Gigaword 5 (6B tokens, 400K vocab, uncased).
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
fse/glove-wiki-gigaword-300
|
fse
| 2021-12-02T16:44:23Z | 0 | 5 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# GloVe Wikipedia + Gigaword
Pre-trained GloVe vectors trained on Wikipedia 2014 and Gigaword 5 (6B tokens, 400K vocab, uncased).
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
fse/glove-twitter-50
|
fse
| 2021-12-02T16:41:57Z | 0 | 0 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
fse/glove-twitter-200
|
fse
| 2021-12-02T16:40:17Z | 0 | 1 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
fse/glove-twitter-100
|
fse
| 2021-12-02T16:39:20Z | 0 | 0 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
huggingtweets/derspiegel
|
huggingtweets
| 2021-12-02T16:13:08Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/derspiegel/1638461583796/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1214723509521387520/7UENeEVp_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">DER SPIEGEL</div>
<div style="text-align: center; font-size: 14px;">@derspiegel</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from DER SPIEGEL.
| Data | DER SPIEGEL |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 478 |
| Short tweets | 6 |
| Tweets kept | 2766 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2uv8zr0k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @derspiegel's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/i3q4xu9o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/i3q4xu9o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/derspiegel')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/jayalammar
|
huggingtweets
| 2021-12-02T15:51:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/jayalammar/1638460288971/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1325460517922729984/xDO9dBt-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jay Alammar</div>
<div style="text-align: center; font-size: 14px;">@jayalammar</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jay Alammar.
| Data | Jay Alammar |
| --- | --- |
| Tweets downloaded | 692 |
| Retweets | 198 |
| Short tweets | 35 |
| Tweets kept | 459 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wf3zug3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jayalammar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/hq8g8xlh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/hq8g8xlh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jayalammar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
emrecan/bert-base-turkish-cased-allnli_tr
|
emrecan
| 2021-12-02T14:58:36Z | 19 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: mit
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-turkish-cased_allnli_tr
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the nli_tr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5771
- Accuracy: 0.7978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8559 | 0.03 | 1000 | 0.7577 | 0.6798 |
| 0.6612 | 0.07 | 2000 | 0.7263 | 0.6958 |
| 0.6115 | 0.1 | 3000 | 0.6431 | 0.7364 |
| 0.5916 | 0.14 | 4000 | 0.6347 | 0.7407 |
| 0.5719 | 0.17 | 5000 | 0.6317 | 0.7483 |
| 0.5575 | 0.2 | 6000 | 0.6034 | 0.7544 |
| 0.5521 | 0.24 | 7000 | 0.6148 | 0.7568 |
| 0.5393 | 0.27 | 8000 | 0.5931 | 0.7610 |
| 0.5382 | 0.31 | 9000 | 0.5866 | 0.7665 |
| 0.5306 | 0.34 | 10000 | 0.5881 | 0.7594 |
| 0.5295 | 0.37 | 11000 | 0.6120 | 0.7632 |
| 0.5225 | 0.41 | 12000 | 0.5620 | 0.7759 |
| 0.5112 | 0.44 | 13000 | 0.5641 | 0.7769 |
| 0.5133 | 0.48 | 14000 | 0.5571 | 0.7798 |
| 0.5023 | 0.51 | 15000 | 0.5719 | 0.7722 |
| 0.5017 | 0.54 | 16000 | 0.5482 | 0.7844 |
| 0.5111 | 0.58 | 17000 | 0.5503 | 0.7800 |
| 0.4929 | 0.61 | 18000 | 0.5502 | 0.7836 |
| 0.4923 | 0.65 | 19000 | 0.5424 | 0.7843 |
| 0.4894 | 0.68 | 20000 | 0.5417 | 0.7851 |
| 0.4877 | 0.71 | 21000 | 0.5514 | 0.7841 |
| 0.4818 | 0.75 | 22000 | 0.5494 | 0.7848 |
| 0.4898 | 0.78 | 23000 | 0.5450 | 0.7859 |
| 0.4823 | 0.82 | 24000 | 0.5417 | 0.7878 |
| 0.4806 | 0.85 | 25000 | 0.5354 | 0.7875 |
| 0.4779 | 0.88 | 26000 | 0.5338 | 0.7848 |
| 0.4744 | 0.92 | 27000 | 0.5277 | 0.7934 |
| 0.4678 | 0.95 | 28000 | 0.5507 | 0.7871 |
| 0.4727 | 0.99 | 29000 | 0.5603 | 0.7789 |
| 0.4243 | 1.02 | 30000 | 0.5626 | 0.7894 |
| 0.3955 | 1.05 | 31000 | 0.5324 | 0.7939 |
| 0.4022 | 1.09 | 32000 | 0.5322 | 0.7925 |
| 0.3976 | 1.12 | 33000 | 0.5450 | 0.7920 |
| 0.3913 | 1.15 | 34000 | 0.5464 | 0.7948 |
| 0.406 | 1.19 | 35000 | 0.5406 | 0.7958 |
| 0.3875 | 1.22 | 36000 | 0.5489 | 0.7878 |
| 0.4024 | 1.26 | 37000 | 0.5427 | 0.7925 |
| 0.3988 | 1.29 | 38000 | 0.5335 | 0.7904 |
| 0.393 | 1.32 | 39000 | 0.5415 | 0.7923 |
| 0.3988 | 1.36 | 40000 | 0.5385 | 0.7962 |
| 0.3912 | 1.39 | 41000 | 0.5383 | 0.7950 |
| 0.3949 | 1.43 | 42000 | 0.5415 | 0.7931 |
| 0.3902 | 1.46 | 43000 | 0.5438 | 0.7893 |
| 0.3948 | 1.49 | 44000 | 0.5348 | 0.7906 |
| 0.3921 | 1.53 | 45000 | 0.5361 | 0.7890 |
| 0.3944 | 1.56 | 46000 | 0.5419 | 0.7953 |
| 0.3959 | 1.6 | 47000 | 0.5402 | 0.7967 |
| 0.3926 | 1.63 | 48000 | 0.5429 | 0.7925 |
| 0.3854 | 1.66 | 49000 | 0.5346 | 0.7959 |
| 0.3864 | 1.7 | 50000 | 0.5241 | 0.7979 |
| 0.385 | 1.73 | 51000 | 0.5149 | 0.8002 |
| 0.3871 | 1.77 | 52000 | 0.5325 | 0.8002 |
| 0.3819 | 1.8 | 53000 | 0.5332 | 0.8022 |
| 0.384 | 1.83 | 54000 | 0.5419 | 0.7873 |
| 0.3899 | 1.87 | 55000 | 0.5225 | 0.7974 |
| 0.3894 | 1.9 | 56000 | 0.5358 | 0.7977 |
| 0.3838 | 1.94 | 57000 | 0.5264 | 0.7988 |
| 0.3881 | 1.97 | 58000 | 0.5280 | 0.7956 |
| 0.3756 | 2.0 | 59000 | 0.5601 | 0.7969 |
| 0.3156 | 2.04 | 60000 | 0.5936 | 0.7925 |
| 0.3125 | 2.07 | 61000 | 0.5898 | 0.7938 |
| 0.3179 | 2.11 | 62000 | 0.5591 | 0.7981 |
| 0.315 | 2.14 | 63000 | 0.5853 | 0.7970 |
| 0.3122 | 2.17 | 64000 | 0.5802 | 0.7979 |
| 0.3105 | 2.21 | 65000 | 0.5758 | 0.7979 |
| 0.3076 | 2.24 | 66000 | 0.5685 | 0.7980 |
| 0.3117 | 2.28 | 67000 | 0.5799 | 0.7944 |
| 0.3108 | 2.31 | 68000 | 0.5742 | 0.7988 |
| 0.3047 | 2.34 | 69000 | 0.5907 | 0.7921 |
| 0.3114 | 2.38 | 70000 | 0.5723 | 0.7937 |
| 0.3035 | 2.41 | 71000 | 0.5944 | 0.7955 |
| 0.3129 | 2.45 | 72000 | 0.5838 | 0.7928 |
| 0.3071 | 2.48 | 73000 | 0.5929 | 0.7949 |
| 0.3061 | 2.51 | 74000 | 0.5794 | 0.7967 |
| 0.3068 | 2.55 | 75000 | 0.5892 | 0.7954 |
| 0.3053 | 2.58 | 76000 | 0.5796 | 0.7962 |
| 0.3117 | 2.62 | 77000 | 0.5763 | 0.7981 |
| 0.3062 | 2.65 | 78000 | 0.5852 | 0.7964 |
| 0.3004 | 2.68 | 79000 | 0.5793 | 0.7966 |
| 0.3146 | 2.72 | 80000 | 0.5693 | 0.7985 |
| 0.3146 | 2.75 | 81000 | 0.5788 | 0.7982 |
| 0.3079 | 2.79 | 82000 | 0.5726 | 0.7978 |
| 0.3058 | 2.82 | 83000 | 0.5677 | 0.7988 |
| 0.3055 | 2.85 | 84000 | 0.5701 | 0.7982 |
| 0.3049 | 2.89 | 85000 | 0.5809 | 0.7970 |
| 0.3044 | 2.92 | 86000 | 0.5741 | 0.7986 |
| 0.3057 | 2.96 | 87000 | 0.5743 | 0.7980 |
| 0.3081 | 2.99 | 88000 | 0.5771 | 0.7978 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
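A minimal zero-shot classification sketch mirroring the widget examples above:
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="emrecan/bert-base-turkish-cased-allnli_tr",
)
result = classifier(
    "Dolar yükselmeye devam ediyor.",
    candidate_labels=["ekonomi", "siyaset", "spor"],
)
print(result["labels"], result["scores"])
```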
|
fse/glove-twitter-25
|
fse
| 2021-12-02T13:39:31Z | 0 | 0 | null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- glove
- gensim
- fse
---
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
chandank/bart-base-finetuned-kaggglenews-batch8-epochs10
|
chandank
| 2021-12-02T12:42:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-kaggglenews-batch8-epochs10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8-epochs10
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5763
- Rouge1: 28.693
- Rouge2: 16.666
- Rougel: 24.2361
- Rougelsum: 26.0289
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.6043 | 27.8611 | 15.8713 | 23.8365 | 25.378 | 20.0 |
| 1.9054 | 2.0 | 990 | 1.5613 | 28.2715 | 16.3724 | 24.3212 | 25.8499 | 20.0 |
| 1.651 | 3.0 | 1485 | 1.5394 | 28.6282 | 16.2976 | 24.2336 | 25.9434 | 20.0 |
| 1.4955 | 4.0 | 1980 | 1.5438 | 28.9266 | 16.7257 | 24.61 | 26.443 | 20.0 |
| 1.4034 | 5.0 | 2475 | 1.5449 | 28.2296 | 16.1292 | 23.9698 | 25.651 | 20.0 |
| 1.3077 | 6.0 | 2970 | 1.5642 | 28.4486 | 16.3833 | 24.1629 | 26.0013 | 20.0 |
| 1.2505 | 7.0 | 3465 | 1.5566 | 28.5469 | 16.5374 | 24.2966 | 25.962 | 20.0 |
| 1.2027 | 8.0 | 3960 | 1.5730 | 28.7278 | 16.6442 | 24.2531 | 26.1171 | 20.0 |
| 1.1571 | 9.0 | 4455 | 1.5690 | 28.7736 | 16.7491 | 24.3066 | 26.1439 | 20.0 |
| 1.1237 | 10.0 | 4950 | 1.5763 | 28.693 | 16.666 | 24.2361 | 26.0289 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ai-forever/rudalle-Emojich
|
ai-forever
| 2021-12-02T11:06:48Z | 0 | 16 | null |
[
"pytorch",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Emojich

### generate emojis from text
Model was trained by [Sber AI](https://github.com/sberbank-ai)
* Task: `text2image generation`
* Num Parameters: `1.3 B`
* Training Data Volume: `120 million text-image pairs` & [`2749 text-emoji pairs`](https://www.kaggle.com/shonenkov/russian-emoji)
[](https://telegram.me/addstickers/SberAI_ruDALLE)
### Model Description
😋 Emojich is a 1.3-billion-parameter model from the GPT-3-like family; it generates emoji-style images with the brain of ◾ Malevich.
### Fine-tuning stage:
The main goal of fine-tuning is to keep the generalization of the [ruDALL-E Malevich (XL)](https://huggingface.co/sberbank-ai/rudalle-Malevich) model while adapting it to text-to-emoji generation. ruDALL-E Malevich is a large multimodal pretrained transformer that works with images and texts.
Freezing the feedforward and self-attention layers of a pretrained transformer has been shown to perform well when switching between modalities. Even so, the model can easily over-fit the text modality and lose generalization.
To deal with this, the coefficient of the image-codebook part of the weighted cross-entropy loss was increased by a factor of 10^3.
The full version of the training code is available on Kaggle: [](https://www.kaggle.com/shonenkov/emojich-rudall-e)
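As an illustration only (not the released training code), the re-weighting described above can be sketched as a combined cross-entropy in which the image-codebook term is scaled by 10^3:
```python
import torch.nn.functional as F

def weighted_dalle_loss(text_logits, text_targets, image_logits, image_targets, image_weight=1e3):
    """Cross-entropy over text tokens and image-codebook tokens, with the image part up-weighted.

    Logits are expected as (batch, seq_len, vocab); targets as (batch, seq_len).
    """
    loss_text = F.cross_entropy(text_logits.transpose(1, 2), text_targets)
    loss_image = F.cross_entropy(image_logits.transpose(1, 2), image_targets)
    return loss_text + image_weight * loss_image
```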
### Examples of generated emojis
All examples are generated automatically (without manual cherry-picking) with hyper-parameters:
seed 42, batch size 16, top-k 2048, top-p 0.995, temperature 1.0, GPU A100.
To get better emojis, run more attempts (~512) and select the best one manually.
*Remember, the great art makers became "great" after creating just one masterpiece.*

|
LzLzLz/Bert
|
LzLzLz
| 2021-12-02T06:50:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:04Z |
It's a sentiment inference model based on BERT.
|
Akari/albert-base-v2-finetuned-squad
|
Akari
| 2021-12-02T05:36:13Z | 51 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: albert-base-v2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8695 | 1.0 | 8248 | 0.8813 |
| 0.6333 | 2.0 | 16496 | 0.8042 |
| 0.4372 | 3.0 | 24744 | 0.9492 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.7.1
- Datasets 1.15.1
- Tokenizers 0.10.3
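A minimal question-answering sketch (the question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Akari/albert-base-v2-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",          # placeholder question
    context="The model was fine-tuned on the SQuAD v2 dataset.",   # placeholder context
)
print(result["answer"], result["score"])
```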
|
BigSalmon/FormalRobertaaa
|
BigSalmon
| 2021-12-02T00:23:58Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
https://huggingface.co/spaces/BigSalmon/MASK2
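A minimal fill-mask sketch, assuming the checkpoint works with the standard pipeline (the example sentence is a placeholder):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="BigSalmon/FormalRobertaaa")
# RoBERTa-style models use "<mask>" as the mask token.
for prediction in fill("The committee reached a <mask> decision."):
    print(prediction["token_str"], round(prediction["score"], 3))
```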
|
aretw0/t5-small-finetuned-en-to-ro-epoch.04375
|
aretw0
| 2021-12-01T21:21:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-epoch.04375
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3292
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-epoch.04375
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4137
- Bleu: 7.3292
- Gen Len: 18.2541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.04375
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6211 | 0.04 | 1669 | 1.4137 | 7.3292 | 18.2541 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
emrecan/bert-base-multilingual-cased-snli_tr
|
emrecan
| 2021-12-01T19:43:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
emrecan/distilbert-base-turkish-cased-snli_tr
|
emrecan
| 2021-12-01T19:42:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
hankzhong/electra-small-discriminator-finetuned-squad
|
hankzhong
| 2021-12-01T19:04:28Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: electra-small-discriminator-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-discriminator-finetuned-squad
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5751 | 1.0 | 2767 | 1.3952 |
| 1.2939 | 2.0 | 5534 | 1.2458 |
| 1.1866 | 3.0 | 8301 | 1.2174 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/binance
|
huggingtweets
| 2021-12-01T14:02:42Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/binance/1638367358099/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1466001345324875784/4RrjsTR__400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Binance</div>
<div style="text-align: center; font-size: 14px;">@binance</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Binance.
| Data | Binance |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 268 |
| Short tweets | 353 |
| Tweets kept | 2629 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/m31ml960/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @binance's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vx6m0ip) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vx6m0ip/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/binance')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
rossanez/t5-small-finetuned-de-en-256
|
rossanez
| 2021-12-01T11:08:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-small-finetuned-de-en-256
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
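## How to use
A minimal translation sketch; whether this checkpoint still expects the original T5 task prefix is an assumption, since the card does not document the preprocessing:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("rossanez/t5-small-finetuned-de-en-256")
model = AutoModelForSeq2SeqLM.from_pretrained("rossanez/t5-small-finetuned-de-en-256")

# T5 checkpoints are usually prompted with a task prefix; keeping the
# original "translate German to English:" prefix here is an assumption.
inputs = tokenizer("translate German to English: Das Haus ist wunderbar.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```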
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.2663 | 4.5343 | 17.698 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Emmanuel/bert-finetuned-ner
|
Emmanuel
| 2021-12-01T11:05:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9317394888705688
- name: Recall
type: recall
value: 0.9510265903736116
- name: F1
type: f1
value: 0.9412842508536686
- name: Accuracy
type: accuracy
value: 0.9865779713898863
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0603
- Precision: 0.9317
- Recall: 0.9510
- F1: 0.9413
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
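## How to use
A minimal usage sketch with the token-classification pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# aggregation_strategy="simple" groups word pieces into whole entities
ner = pipeline(
    "token-classification",
    model="Emmanuel/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Clara and I live in Berkeley, California."))
```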
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0872 | 1.0 | 1756 | 0.0660 | 0.9152 | 0.9350 | 0.9250 | 0.9827 |
| 0.0386 | 2.0 | 3512 | 0.0579 | 0.9374 | 0.9498 | 0.9436 | 0.9864 |
| 0.0225 | 3.0 | 5268 | 0.0603 | 0.9317 | 0.9510 | 0.9413 | 0.9866 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-base-finetuned-de-en
|
rossanez
| 2021-12-01T10:55:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-base-finetuned-de-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-de-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.4324 | 1.2308 | 17.8904 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ying-tina/wav2vec2-base-timit-demo-colab-32
|
ying-tina
| 2021-12-01T10:54:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
name: wav2vec2-base-timit-demo-colab-32
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-32
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4488
- Wer: 0.3149
## Model description
More information needed
## Intended uses & limitations
More information needed
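## How to use
A minimal transcription sketch (the audio path is illustrative; the file should be 16 kHz mono speech, and `ffmpeg` is needed for decoding):
```python
from transformers import pipeline

# Transcribe a local audio file with the fine-tuned checkpoint
asr = pipeline("automatic-speech-recognition", model="ying-tina/wav2vec2-base-timit-demo-colab-32")
print(asr("sample.wav")["text"])
```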
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6155 | 4.0 | 500 | 2.2647 | 0.9992 |
| 0.9037 | 8.0 | 1000 | 0.4701 | 0.4336 |
| 0.3159 | 12.0 | 1500 | 0.4247 | 0.3575 |
| 0.1877 | 16.0 | 2000 | 0.4477 | 0.3442 |
| 0.1368 | 20.0 | 2500 | 0.4932 | 0.3384 |
| 0.1062 | 24.0 | 3000 | 0.4758 | 0.3202 |
| 0.0928 | 28.0 | 3500 | 0.4488 | 0.3149 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
|
emrecan/bert-base-turkish-cased-snli_tr
|
emrecan
| 2021-12-01T10:49:12Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
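## How to use
A minimal zero-shot classification sketch based on the widget examples above (the Turkish `hypothesis_template` is an illustrative assumption; the pipeline's English default template may work less well):
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="emrecan/bert-base-turkish-cased-snli_tr",
)

# Candidate labels mirror the widget examples; the hypothesis template
# is an assumption, since the pipeline's default template is English.
print(classifier(
    "Dolar yükselmeye devam ediyor.",
    candidate_labels=["ekonomi", "siyaset", "spor"],
    hypothesis_template="Bu örnek {} ile ilgilidir.",
))
```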
|
emrecan/bert-base-turkish-cased-multinli_tr
|
emrecan
| 2021-12-01T10:45:51Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
BSen/wav2vec2-large-xls-r-300m-turkish-colab
|
BSen
| 2021-12-01T10:18:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
glasses/vit_base_patch16_224
|
glasses
| 2021-12-01T08:23:58Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2010.11929",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# vit_base_patch16_224
Implementation of Vision Transformer (ViT) proposed in [An Image Is
Worth 16x16 Words: Transformers For Image Recognition At
Scale](https://arxiv.org/pdf/2010.11929.pdf)
The following image from the authors shows the architecture.

``` python
ViT.vit_small_patch16_224()
ViT.vit_base_patch16_224()
ViT.vit_base_patch16_384()
ViT.vit_base_patch32_384()
ViT.vit_huge_patch16_224()
ViT.vit_huge_patch32_384()
ViT.vit_large_patch16_224()
ViT.vit_large_patch16_384()
ViT.vit_large_patch32_384()
```
Examples:
``` python
# change activation
ViT.vit_base_patch16_224(activation = nn.SELU)
# change number of classes (default is 1000 )
ViT.vit_base_patch16_224(n_classes=100)
# pass a different block, default is TransformerEncoderBlock
ViT.vit_base_patch16_224(block=MyCoolTransformerBlock)
# get features
model = ViT.vit_base_patch16_224()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...]
# change the tokens, you have to subclass ViTTokens
class MyTokens(ViTTokens):
def __init__(self, emb_size: int):
super().__init__(emb_size)
self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size))
ViT(tokens=MyTokens)
```
|
glasses/efficientnet_b3
|
glasses
| 2021-12-01T08:08:37Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1905.11946",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# efficientnet_b3
Implementation of EfficientNet proposed in [EfficientNet: Rethinking
Model Scaling for Convolutional Neural
Networks](https://arxiv.org/abs/1905.11946)

The basic architecture is similar to MobileNetV2, and it was obtained using
[Progressive Neural Architecture
Search](https://arxiv.org/abs/1905.11946).
The following table shows the basic architecture (EfficientNet-B0):

Then, the architecture is scaled up from `efficientnet_b0` to `efficientnet_b7`
using compound scaling.

``` python
EfficientNet.efficientnet_b0()
EfficientNet.efficientnet_b1()
EfficientNet.efficientnet_b2()
EfficientNet.efficientnet_b3()
EfficientNet.efficientnet_b4()
EfficientNet.efficientnet_b5()
EfficientNet.efficientnet_b6()
EfficientNet.efficientnet_b7()
EfficientNet.efficientnet_b8()
EfficientNet.efficientnet_l2()
```
Examples:
``` python
EfficientNet.efficientnet_b0(activation = nn.SELU)
# change number of classes (default is 1000 )
EfficientNet.efficientnet_b0(n_classes=100)
# pass a different block
EfficientNet.efficientnet_b0(block=...)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = EfficientNet.efficientnet_b0()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 32, 112, 112]), torch.Size([1, 24, 56, 56]), torch.Size([1, 40, 28, 28]), torch.Size([1, 80, 14, 14])]
```
|
glasses/vgg19_bn
|
glasses
| 2021-12-01T08:06:23Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1409.1556",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# vgg19_bn
Implementation of VGG proposed in [Very Deep Convolutional Networks For
Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf)
``` python
VGG.vgg11()
VGG.vgg13()
VGG.vgg16()
VGG.vgg19()
VGG.vgg11_bn()
VGG.vgg13_bn()
VGG.vgg16_bn()
VGG.vgg19_bn()
```
Please be aware that the `_bn` variants use BatchNorm, but these models are
quite old and, at the time, it was not yet common knowledge that the bias is
superfluous in a conv followed by a batch norm.
Examples:
``` python
# change activation
VGG.vgg11(activation = nn.SELU)
# change number of classes (default is 1000 )
VGG.vgg11(n_classes=100)
# pass a different block
from nn.models.classification.senet import SENetBasicBlock
VGG.vgg11(block=SENetBasicBlock)
# store the features tensor after every block
```
|
glasses/vgg11_bn
|
glasses
| 2021-12-01T07:58:18Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1409.1556",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# vgg11_bn
Implementation of VGG proposed in [Very Deep Convolutional Networks For
Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf)
``` python
VGG.vgg11()
VGG.vgg13()
VGG.vgg16()
VGG.vgg19()
VGG.vgg11_bn()
VGG.vgg13_bn()
VGG.vgg16_bn()
VGG.vgg19_bn()
```
Please be aware that the `_bn` variants use BatchNorm, but these models are
quite old and, at the time, it was not yet common knowledge that the bias is
superfluous in a conv followed by a batch norm.
Examples:
``` python
# change activation
VGG.vgg11(activation = nn.SELU)
# change number of classes (default is 1000 )
VGG.vgg11(n_classes=100)
# pass a different block
from nn.models.classification.senet import SENetBasicBlock
VGG.vgg11(block=SENetBasicBlock)
# store the features tensor after every block
```
|
glasses/vgg11
|
glasses
| 2021-12-01T07:53:25Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1409.1556",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# vgg11
Implementation of VGG proposed in [Very Deep Convolutional Networks For
Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf)
``` python
VGG.vgg11()
VGG.vgg13()
VGG.vgg16()
VGG.vgg19()
VGG.vgg11_bn()
VGG.vgg13_bn()
VGG.vgg16_bn()
VGG.vgg19_bn()
```
Please be aware that the `_bn` variants use BatchNorm, but these models are
quite old and, at the time, it was not yet common knowledge that the bias is
superfluous in a conv followed by a batch norm.
Examples:
``` python
# change activation
VGG.vgg11(activation = nn.SELU)
# change number of classes (default is 1000 )
VGG.vgg11(n_classes=100)
# pass a different block
from nn.models.classification.senet import SENetBasicBlock
VGG.vgg11(block=SENetBasicBlock)
# store the features tensor after every block
```
|
glasses/densenet161
|
glasses
| 2021-12-01T07:50:20Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1608.06993",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# densenet161
Implementation of DenseNet proposed in [Densely Connected Convolutional
Networks](https://arxiv.org/abs/1608.06993)
Create a default model
``` python
DenseNet.densenet121()
DenseNet.densenet161()
DenseNet.densenet169()
DenseNet.densenet201()
```
Examples:
``` python
# change activation
DenseNet.densenet121(activation = nn.SELU)
# change number of classes (default is 1000 )
DenseNet.densenet121(n_classes=100)
# pass a different block
DenseNet.densenet121(block=...)
# change the initial convolution
model = DenseNet.densenet121()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = DenseNet.densenet121()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7]), torch.Size([1, 1024, 7, 7])]
```
|
glasses/densenet201
|
glasses
| 2021-12-01T07:49:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1608.06993",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# densenet201
Implementation of DenseNet proposed in [Densely Connected Convolutional
Networks](https://arxiv.org/abs/1608.06993)
Create a default model
``` python
DenseNet.densenet121()
DenseNet.densenet161()
DenseNet.densenet169()
DenseNet.densenet201()
```
Examples:
``` python
# change activation
DenseNet.densenet121(activation = nn.SELU)
# change number of classes (default is 1000 )
DenseNet.densenet121(n_classes=100)
# pass a different block
DenseNet.densenet121(block=...)
# change the initial convolution
model = DenseNet.densenet121()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = DenseNet.densenet121()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7]), torch.Size([1, 1024, 7, 7])]
```
|
glasses/densenet169
|
glasses
| 2021-12-01T07:48:55Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1608.06993",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# densenet169
Implementation of DenseNet proposed in [Densely Connected Convolutional
Networks](https://arxiv.org/abs/1608.06993)
Create a default model
``` python
DenseNet.densenet121()
DenseNet.densenet161()
DenseNet.densenet169()
DenseNet.densenet201()
```
Examples:
``` python
# change activation
DenseNet.densenet121(activation = nn.SELU)
# change number of classes (default is 1000 )
DenseNet.densenet121(n_classes=100)
# pass a different block
DenseNet.densenet121(block=...)
# change the initial convolution
model = DenseNet.densenet121()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = DenseNet.densenet121()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7]), torch.Size([1, 1024, 7, 7])]
```
|
glasses/regnety_008
|
glasses
| 2021-12-01T07:46:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2003.13678",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# regnety_008
Implementation of RegNet proposed in [Designing Network Design
Spaces](https://arxiv.org/abs/2003.13678)
The main idea is to start with a high-dimensional search space and
iteratively reduce it by empirically applying constraints based on the
best-performing models sampled from the current search space.
The resulting models are light, accurate, and faster than
EfficientNets (up to 5× faster!).
For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the
bottleneck ratio $b_i$ for all stages $i$. The following table shows
all the restrictions applied from one search space to the next one.

The paper is really well written and very interesting; I highly
recommend reading it.
``` python
RegNet.regnetx_002()
RegNet.regnetx_004()
RegNet.regnetx_006()
RegNet.regnetx_008()
RegNet.regnetx_016()
RegNet.regnetx_040()
RegNet.regnetx_064()
RegNet.regnetx_080()
RegNet.regnetx_120()
RegNet.regnetx_160()
RegNet.regnetx_320()
# Y variants (with SE)
RegNet.regnety_002()
# ...
RegNet.regnety_320()
```
You can easily customize your model.
Examples:
``` python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000 )
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
|
glasses/regnety_006
|
glasses
| 2021-12-01T07:46:05Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2003.13678",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# regnety_006
Implementation of RegNet proposed in [Designing Network Design
Spaces](https://arxiv.org/abs/2003.13678)
The main idea is to start with a high-dimensional search space and
iteratively reduce it by empirically applying constraints based on the
best-performing models sampled from the current search space.
The resulting models are light, accurate, and faster than
EfficientNets (up to 5× faster!).
For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the
bottleneck ratio $b_i$ for all stages $i$. The following table shows
all the restrictions applied from one search space to the next one.

The paper is really well written and very interesting; I highly
recommend reading it.
``` python
RegNet.regnetx_002()
RegNet.regnetx_004()
RegNet.regnetx_006()
RegNet.regnetx_008()
RegNet.regnetx_016()
RegNet.regnetx_040()
RegNet.regnetx_064()
RegNet.regnetx_080()
RegNet.regnetx_120()
RegNet.regnetx_160()
RegNet.regnetx_320()
# Y variants (with SE)
RegNet.regnety_002()
# ...
RegNet.regnety_320()
```
You can easily customize your model.
Examples:
``` python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000 )
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
|
mofawzy/argpt2-goodreads
|
mofawzy
| 2021-12-01T06:55:41Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"ar",
"dataset:LABR",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
language: ar
datasets:
- LABR
widget:
- text: "كان الكاتب ممكن"
- text: "كتاب ممتاز ولكن"
- text: "رواية درامية جدا والافكار بسيطة"
model-index:
- name: argpt2-goodreads
results: []
---
# argpt2-goodreads
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the Goodreads LABR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4389
## Model description
Generates sentences, both positive and negative review examples, based on the Goodreads corpus in Arabic.
## Intended uses & limitations
The model is fine-tuned on Arabic only, with the goal of generating sentences such as reviews; to do the same for other languages you need to fine-tune it yourself.
Any harmful content generated by GPT-2 should not be used anywhere.
## Training and evaluation data
Training and validation were done on the Goodreads LABR dataset: 80% for training and 20% for testing.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("mofawzy/argpt2-goodreads")
model = AutoModelForCausalLM.from_pretrained("mofawzy/argpt2-goodreads")
```
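A short generation sketch continuing the snippet above (the Arabic prompt is taken from the widget examples; the sampling settings are illustrative):
```python
from transformers import pipeline

# Reuse the loaded model and tokenizer in a text-generation pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("كتاب ممتاز ولكن", max_length=50, do_sample=True, top_p=0.95))
```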
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
- train_loss = 1.474
### Evaluation results
- eval_loss = 1.4389
### train metrics
- epoch = 20.0
- train_loss = 1.474
- train_runtime = 2:18:14.51
- train_samples = 108110
- train_samples_per_second = 260.678
- train_steps_per_second = 2.037
### eval metrics
- epoch = 20.0
- eval_loss = 1.4389
- eval_runtime = 0:04:37.01
- eval_samples = 27329
- eval_samples_per_second = 98.655
- eval_steps_per_second = 0.773
- perplexity = 4.2162
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
MMG/bert-base-spanish-wwm-cased-finetuned-sqac
|
MMG
| 2021-12-01T06:13:29Z | 34 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"es",
"dataset:sqac",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- sqac
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-sqac
results: []
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-sqac
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the sqac dataset.
It achieves the following results on the evaluation set:
- exact_match: 62.017167
- f1: 79.452767
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1335 | 1.0 | 1230 | 0.9346 |
| 0.6794 | 2.0 | 2460 | 0.8634 |
| 0.3992 | 3.0 | 3690 | 0.9662 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ykliu1892/translation-en-pt-t5-finetuned-Duolingo
|
ykliu1892
| 2021-12-01T04:58:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: translation-en-pt-t5-finetuned-Duolingo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation-en-pt-t5-finetuned-Duolingo
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7362
- Bleu: 39.4725
- Gen Len: 9.002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.5429 | 0.24 | 9000 | 0.7461 | 39.4744 | 9.0 |
| 0.5302 | 0.48 | 18000 | 0.7431 | 39.7559 | 8.97 |
| 0.5309 | 0.72 | 27000 | 0.7388 | 39.6751 | 8.998 |
| 0.5336 | 0.96 | 36000 | 0.7362 | 39.4725 | 9.002 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-256-nofp16
|
rossanez
| 2021-12-01T00:54:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-small-finetuned-de-en-256-nofp16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256-nofp16
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.1234 | 7.7305 | 17.4033 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-256-wd-01
|
rossanez
| 2021-12-01T00:48:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-small-finetuned-de-en-256-wd-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256-wd-01
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.1202 | 7.5964 | 17.3996 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
kaporter/bert-base-uncased-finetuned-squad
|
kaporter
| 2021-11-30T22:42:17Z | 267 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model_index:
- name: bert-base-uncased-finetuned-squad
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad
type: squad
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0749 | 1.0 | 5533 | 1.0167 |
| 0.7851 | 2.0 | 11066 | 1.0299 |
| 0.6067 | 3.0 | 16599 | 1.0725 |
### Framework versions
- Transformers 4.8.1
- Pytorch 1.8.1
- Datasets 1.16.1
- Tokenizers 0.10.1
|
mmcquade11-test/reuters-summarization
|
mmcquade11-test
| 2021-11-30T21:43:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"en",
"dataset:mmcquade11/autonlp-data-reuters-summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- mmcquade11/autonlp-data-reuters-summarization
co2_eq_emissions: 286.4350821612984
---
This is an AutoNLP model I trained on the Reuters dataset.
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 34018133
- CO2 Emissions (in grams): 286.4350821612984
## Validation Metrics
- Loss: 1.1805976629257202
- Rouge1: 55.4013
- Rouge2: 30.8004
- RougeL: 52.57
- RougeLsum: 52.6103
- Gen Len: 15.3458
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/mmcquade11/autonlp-reuters-summarization-34018133
```
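Alternatively, a local usage sketch with the `transformers` summarization pipeline (assuming this repository loads as a standard Pegasus seq2seq checkpoint; the article text is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mmcquade11-test/reuters-summarization")

article = (
    "Oil prices rose on Monday as markets weighed supply concerns "
    "against fears of slowing global demand."
)
print(summarizer(article, max_length=60, min_length=5, do_sample=False))
```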
|
nouamanetazi/cover-letter-t5-base
|
nouamanetazi
| 2021-11-30T21:14:47Z | 7 | 4 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"t5-base",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
license: apache-2.0
tags:
- generated_from_trainer
- t5-base
model-index:
- name: cover-letter-t5-base
results: []
widget:
- text: "coverletter name: Nouamane Tazi job: Machine Learning Engineer at HuggingFace background: Master's student in AI at the University of Paris Saclay experiences: I participated in the Digital Tech Year program, developing three minimal valuable products for three companies in a 7-week constraint. I also spent 1 year as a machine learning engineer for Flashbrand where I mainly worked on their chatbot . And I recently completed the HuggingFace course, where I built an amazing huggingface space. I am a strong team player."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cover-letter-t5-base
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on cover letter samples scraped from Indeed and JobHero.
## Model description
More information needed
## Intended uses & limitations
More information needed
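## How to use
A usage sketch following the widget's prompt format (the name, job, and background below are made-up placeholders; the generation settings are assumptions):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="nouamanetazi/cover-letter-t5-base")

# The model expects the structured "coverletter ..." prompt format shown in the widget
prompt = (
    "coverletter name: Jane Doe "
    "job: Data Scientist at ExampleCorp "
    "background: MSc in Computer Science "
    "experiences: Built NLP pipelines for customer support tickets and deployed them to production."
)
print(generator(prompt, max_length=300)[0]["generated_text"])
```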
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
glasses/regnetx_016
|
glasses
| 2021-11-30T20:26:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2003.13678",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# regnetx_016
Implementation of RegNet proposed in [Designing Network Design
Spaces](https://arxiv.org/abs/2003.13678)
The main idea is to start with a high-dimensional search space and
iteratively reduce it by empirically applying constraints based on the
best-performing models sampled from the current search space.
The resulting models are light, accurate, and faster than
EfficientNets (up to 5× faster!).
For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the
bottleneck ratio $b_i$ for all stages $i$. The following table shows
all the restrictions applied from one search space to the next one.

The paper is really well written and very interesting; I highly
recommend reading it.
``` python
RegNet.regnetx_002()
RegNet.regnetx_004()
RegNet.regnetx_006()
RegNet.regnetx_008()
RegNet.regnetx_016()
RegNet.regnetx_040()
RegNet.regnetx_064()
RegNet.regnetx_080()
RegNet.regnetx_120()
RegNet.regnetx_160()
RegNet.regnetx_320()
# Y variants (with SE)
RegNet.regnety_002()
# ...
RegNet.regnety_320()
```
You can easily customize your model.
Examples:
``` python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000 )
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
|
glasses/regnetx_002
|
glasses
| 2021-11-30T20:25:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2003.13678",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# regnetx_002
Implementation of RegNet proposed in [Designing Network Design
Spaces](https://arxiv.org/abs/2003.13678)
The main idea is to start with a high-dimensional search space and
iteratively reduce it by empirically applying constraints based on the
best-performing models sampled from the current search space.
The resulting models are light, accurate, and faster than
EfficientNets (up to 5× faster!).
For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the
bottleneck ratio $b_i$ for all stages $i$. The following table shows
all the restrictions applied from one search space to the next one.

The paper is really well written and very interesting; I highly
recommend reading it.
``` python
RegNet.regnetx_002()
RegNet.regnetx_004()
RegNet.regnetx_006()
RegNet.regnetx_008()
RegNet.regnetx_016()
RegNet.regnetx_040()
RegNet.regnetx_064()
RegNet.regnetx_080()
RegNet.regnetx_120()
RegNet.regnetx_160()
RegNet.regnetx_320()
# Y variants (with SE)
RegNet.regnety_002()
# ...
RegNet.regnety_320()
```
You can easily customize your model.
Examples:
``` python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000 )
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
|
glasses/wide_resnet101_2
|
glasses
| 2021-11-30T20:20:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1605.07146",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# wide_resnet101_2
Implementation of Wide ResNet proposed in [\"Wide Residual
Networks\"](https://arxiv.org/pdf/1605.07146.pdf)
Create a default model
``` python
WideResNet.wide_resnet50_2()
WideResNet.wide_resnet101_2()
# create a wide_resnet18_4
WideResNet.resnet18(block=WideResNetBottleNeckBlock, width_factor=4)
```
Examples:
``` python
# change activation
WideResNet.wide_resnet50_2(activation = nn.SELU)
# change number of classes (default is 1000 )
WideResNet.wide_resnet50_2(n_classes=100)
# pass a different block
WideResNet.wide_resnet50_2(block=SENetBasicBlock)
# change the initial convolution
model = WideResNet.wide_resnet50_2()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = WideResNet.wide_resnet50_2()
features = []
x = model.encoder.gate(x)
for block in model.encoder.layers:
x = block(x)
features.append(x)
print([x.shape for x in features])
# [torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7])]
```
|
glasses/resnext50_32x4d
|
glasses
| 2021-11-30T20:13:20Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:1611.05431",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# resnext50_32x4d
Implementation of ResNetXt proposed in [\"Aggregated Residual
Transformation for Deep Neural
Networks\"](https://arxiv.org/pdf/1611.05431.pdf)
Create a default model
``` python
ResNetXt.resnext50_32x4d()
ResNetXt.resnext101_32x8d()
# create a resnetxt18_32x4d
ResNetXt.resnet18(block=ResNetXtBottleNeckBlock, groups=32, base_width=4)
```
Examples:
``` python
# change activation
ResNetXt.resnext50_32x4d(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNetXt.resnext50_32x4d(n_classes=100)
# pass a different block
ResNetXt.resnext50_32x4d(block=SENetBasicBlock)
# change the initial convolution
model = ResNetXt.resnext50_32x4d()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = ResNetXt.resnext50_32x4d()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/resnet152
|
glasses
| 2021-11-30T20:12:19Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet152
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/resnet50d
|
glasses
| 2021-11-30T20:10:20Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet50d
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/resnet50
|
glasses
| 2021-11-30T20:09:35Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet50
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/resnet34
|
glasses
| 2021-11-30T20:08:12Z | 33 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet34
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/resnet26
|
glasses
| 2021-11-30T20:06:59Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet26
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/resnet18
|
glasses
| 2021-11-30T20:06:28Z | 37 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet18
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in "Bag of Tricks for Image Classification with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
NDugar/1epochv3
|
NDugar
| 2021-11-30T20:05:36Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"deberta-v3",
"deberta-v2`",
"deberta-mnli",
"zero-shot-classification",
"en",
"arxiv:2006.03654",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:04Z |
---
language: en
tags:
- deberta-v3
- deberta-v2`
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 xxlarge model with 48 layers, 1536 hidden size. The total parameters are 1.5B and it is trained with 160GB raw data.
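As a quick sanity check, the checkpoint can be loaded through the zero-shot classification pipeline. This is a minimal sketch, not an official example: the input sentence and candidate labels are made up, and it assumes the MNLI-style classification head of this checkpoint is compatible with the pipeline.
```python
from transformers import pipeline

# Minimal sketch: hypothetical sentence and candidate labels.
classifier = pipeline("zero-shot-classification", model="NDugar/1epochv3")
result = classifier(
    "The parliament passed the new climate bill after a long debate.",
    candidate_labels=["politics", "sports", "technology"],
)
print(result["labels"][0], result["scores"][0])
```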
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 are also slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed** as it's faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
run_glue.py \
--model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 256 \
--per_device_train_batch_size ${batch_size} \
--learning_rate 3e-6 \
--num_train_epochs 3 \
--output_dir $output_dir \
--overwrite_output_dir \
--logging_steps 10 \
--logging_dir $output_dir \
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
tyoyo/t5-base-TEDxJP-1body-10context
|
tyoyo
| 2021-11-30T19:40:13Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:te_dx_jp",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- te_dx_jp
model-index:
- name: t5-base-TEDxJP-1body-10context
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-1body-10context
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3833
- Wer: 0.1983
- Mer: 0.1900
- Wil: 0.2778
- Wip: 0.7222
- Hits: 56229
- Substitutions: 6686
- Deletions: 3593
- Insertions: 2909
- Cer: 0.1823
## Model description
More information needed
## Intended uses & limitations
More information needed
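A minimal usage sketch is shown below. The Japanese input sentence is made up, and the exact way context utterances are concatenated for this 1body-10context variant is not documented here, so treat it as a rough illustration rather than the intended input format.
```python
from transformers import pipeline

# Minimal sketch: hypothetical ASR-style Japanese input.
fixer = pipeline("text2text-generation", model="tyoyo/t5-base-TEDxJP-1body-10context")
print(fixer("えーっと今日はですね音声認識の話をします")[0]["generated_text"])
```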
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.5641 | 1.0 | 746 | 0.4426 | 0.2336 | 0.2212 | 0.3143 | 0.6857 | 54711 | 7183 | 4614 | 3742 | 0.2238 |
| 0.4867 | 2.0 | 1492 | 0.4017 | 0.2045 | 0.1972 | 0.2863 | 0.7137 | 55378 | 6764 | 4366 | 2470 | 0.1853 |
| 0.4257 | 3.0 | 2238 | 0.3831 | 0.2008 | 0.1933 | 0.2826 | 0.7174 | 55715 | 6788 | 4005 | 2560 | 0.1784 |
| 0.4038 | 4.0 | 2984 | 0.3797 | 0.1963 | 0.1890 | 0.2776 | 0.7224 | 56028 | 6731 | 3749 | 2578 | 0.1748 |
| 0.3817 | 5.0 | 3730 | 0.3769 | 0.1944 | 0.1877 | 0.2758 | 0.7242 | 55926 | 6663 | 3919 | 2345 | 0.1730 |
| 0.3467 | 6.0 | 4476 | 0.3806 | 0.2111 | 0.2002 | 0.2876 | 0.7124 | 56082 | 6688 | 3738 | 3616 | 0.1916 |
| 0.3361 | 7.0 | 5222 | 0.3797 | 0.1977 | 0.1897 | 0.2780 | 0.7220 | 56173 | 6721 | 3614 | 2816 | 0.1785 |
| 0.3107 | 8.0 | 5968 | 0.3814 | 0.1993 | 0.1910 | 0.2792 | 0.7208 | 56167 | 6720 | 3621 | 2916 | 0.1839 |
| 0.3141 | 9.0 | 6714 | 0.3820 | 0.1991 | 0.1907 | 0.2787 | 0.7213 | 56201 | 6709 | 3598 | 2933 | 0.1859 |
| 0.3122 | 10.0 | 7460 | 0.3833 | 0.1983 | 0.1900 | 0.2778 | 0.7222 | 56229 | 6686 | 3593 | 2909 | 0.1823 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ffsouza/tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
|
ffsouza
| 2021-11-30T17:39:53Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- wmt16_en_ro_pre_processed
metrics:
- bleu
model-index:
- name: tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16_en_ro_pre_processed
type: wmt16_en_ro_pre_processed
args: enro
metrics:
- name: Bleu
type: bleu
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
This model is a fine-tuned version of [sshleifer/tiny-mbart](https://huggingface.co/sshleifer/tiny-mbart) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 8.5137
- Bleu: 0.0
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
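For reference, these settings roughly map onto the following `Seq2SeqTrainingArguments`. This is a sketch, not the actual training script; the output directory name is illustrative.
```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed above; not the original training script.
training_args = Seq2SeqTrainingArguments(
    output_dir="tiny-mbart-length-96-finetuned-en-to-ro",  # illustrative name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    weight_decay=0.01,
    fp16=True,  # mixed_precision_training: Native AMP
    predict_with_generate=True,
)
```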
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:|
| 8.2817 | 1.0 | 76290 | 8.5137 | 0.0 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
NDugar/3epoch-3large
|
NDugar
| 2021-11-30T17:34:56Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"deberta-v3",
"deberta-v2`",
"deberta-mnli",
"zero-shot-classification",
"en",
"arxiv:2006.03654",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:04Z |
---
language: en
tags:
- deberta-v3
- deberta-v2`
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 xxlarge model with 48 layers, 1536 hidden size. The total parameters are 1.5B and it is trained with 160GB raw data.
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 are also slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed** as it's faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
run_glue.py \
--model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 256 \
--per_device_train_batch_size ${batch_size} \
--learning_rate 3e-6 \
--num_train_epochs 3 \
--output_dir $output_dir \
--overwrite_output_dir \
--logging_steps 10 \
--logging_dir $output_dir \
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
Raphaelg9/distilbert-base-uncased-finetuned-squad
|
Raphaelg9
| 2021-11-30T17:30:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1323
## Model description
More information needed
## Intended uses & limitations
More information needed
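A minimal usage sketch (the question and context are made up):
```python
from transformers import pipeline

# Minimal sketch: hypothetical question/context pair.
qa = pipeline("question-answering", model="Raphaelg9/distilbert-base-uncased-finetuned-squad")
print(qa(question="Where do giant pandas live?",
         context="Giant pandas live in the mountain forests of central China."))
```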
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8535 | 1.0 | 661 | 2.0684 |
| 1.5385 | 2.0 | 1322 | 2.0954 |
| 1.2312 | 3.0 | 1983 | 2.1323 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tiagohatta/opus-mt-de-en-finetuned-de-to-en-second
|
tiagohatta
| 2021-11-30T17:23:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-de-en-finetuned-de-to-en-second
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 38.959
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-de-en-finetuned-de-to-en-second
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-de-en](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1719
- Bleu: 38.959
- Gen Len: 25.2812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 157 | 1.1492 | 39.2552 | 25.2268 |
| No log | 2.0 | 314 | 1.1601 | 38.8343 | 25.2288 |
| No log | 3.0 | 471 | 1.1651 | 39.0092 | 25.254 |
| 1.8512 | 4.0 | 628 | 1.1704 | 38.9281 | 25.2756 |
| 1.8512 | 5.0 | 785 | 1.1719 | 38.959 | 25.2812 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
beatrice-portelli/DiLBERT
|
beatrice-portelli
| 2021-11-30T16:00:18Z | 7,455 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"fill-mask",
"medical",
"disease",
"classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- medical
- disease
- classification
---
# DiLBERT (Disease Language BERT)
The objective of this model was to obtain a specialized disease-related language model, trained **from scratch**. <br>
We created a pre-training corpus starting from **ICD-11** entities, and enriched it with documents from **PubMed** and **Wikipedia** related to the same entities. <br>
Results of finetuning show that DiLBERT leads to comparable or higher accuracy scores on various classification tasks compared with other general-purpose or in-domain models (e.g., BioClinicalBERT, RoBERTa, XLNet).
Model released with the paper "**DiLBERT: Cheap Embeddings for Disease Related Medical NLP**". <br>
To summarize the practical implications of our work: we pre-trained and fine-tuned a domain-specific BERT model on a small corpus, with comparable or better performance than state-of-the-art models.
This approach may also simplify the development of models for languages other than English, due to the smaller quantity of data needed for training.
### Composition of the pretraining corpus
| Source | Documents | Words |
|---|---:|---:|
| ICD-11 descriptions | 34,676 | 1.0 million |
| PubMed Title and Abstracts | 852,550 | 184.6 million |
| Wikipedia pages | 37,074 | 6.1 million |
### Main repository
For more details check the main repo https://github.com/KevinRoitero/dilbert
# Usage
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("beatrice-portelli/DiLBERT")
model = AutoModelForMaskedLM.from_pretrained("beatrice-portelli/DiLBERT")
```
# How to cite
```
@article{roitero2021dilbert,
title={{DilBERT}: Cheap Embeddings for Disease Related Medical NLP},
author={Roitero, Kevin and Portelli, Beatrice and Popescu, Mihai Horia and Della Mea, Vincenzo},
journal={IEEE Access},
volume={},
pages={},
year={2021},
publisher={IEEE},
note = {In Press}
}
```
|
huggingtweets/hel_ql-shahdashrf_-sinnerslayerr-witheredstrings
|
huggingtweets
| 2021-11-30T15:40:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/hel_ql-shahdashrf_-sinnerslayerr-witheredstrings/1638286821619/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1449201367080386564/GllCx8JB_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1461790972392656898/e1248oRI_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1457045233783701504/fnjAg6lH_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sinner & Hσɳҽყ & Anthropos & VacuumF</div>
<div style="text-align: center; font-size: 14px;">@hel_ql-shahdashrf_-sinnerslayerr-witheredstrings</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Sinner & Hσɳҽყ & Anthropos & VacuumF.
| Data | Sinner | Hσɳҽყ | Anthropos | VacuumF |
| --- | --- | --- | --- | --- |
| Tweets downloaded | 403 | 3240 | 1088 | 379 |
| Retweets | 296 | 135 | 376 | 1 |
| Short tweets | 3 | 734 | 77 | 12 |
| Tweets kept | 104 | 2371 | 635 | 366 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2fhsvt3r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hel_ql-shahdashrf_-sinnerslayerr-witheredstrings's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kjvpfsa) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kjvpfsa/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hel_ql-shahdashrf_-sinnerslayerr-witheredstrings')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tyoyo/t5-base-TEDxJP-1body-5context
|
tyoyo
| 2021-11-30T13:49:54Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
| Epoch | Training Loss | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-----:|:-------------:|:---------------:|:--------:|:--------:|:--------:|:--------:|:-----:|:-------------:|:---------:|:----------:|:--------:|
| 1 | 0.572400 | 0.447836 | 0.262284 | 0.241764 | 0.333088 | 0.666912 | 54709 | 7126 | 4673 | 5645 | 0.242417 |
| 2 | 0.492700 | 0.400297 | 0.203600 | 0.196446 | 0.285798 | 0.714202 | 55389 | 6777 | 4342 | 2422 | 0.183740 |
| 3 | 0.429200 | 0.385705 | 0.201179 | 0.193641 | 0.282458 | 0.717542 | 55717 | 6745 | 4046 | 2589 | 0.179833 |
| 4 | 0.408700 | 0.383085 | 0.198277 | 0.190817 | 0.280919 | 0.719081 | 55921 | 6867 | 3720 | 2600 | 0.177468 |
| 5 | 0.386100 | 0.381157 | 0.192488 | 0.186279 | 0.274890 | 0.725110 | 55923 | 6709 | 3876 | 2217 | 0.171644 |
| 6 | 0.353400 | 0.380517 | 0.193315 | 0.186615 | 0.275510 | 0.724490 | 56039 | 6747 | 3722 | 2388 | 0.170799 |
| 7 | 0.346100 | 0.379445 | 0.194713 | 0.187616 | 0.276780 | 0.723220 | 56074 | 6780 | 3654 | 2516 | 0.171347 |
| 8 | 0.314700 | 0.383521 | 0.196022 | 0.188486 | 0.277974 | 0.722026 | 56130 | 6820 | 3558 | 2659 | 0.179184 |
|
abhishek/autonlp-bbc-roberta-37249301
|
abhishek
| 2021-11-30T13:35:38Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"unk",
"dataset:abhishek/autonlp-data-bbc-roberta",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-bbc-roberta
co2_eq_emissions: 1.9859980179658823
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 37249301
- CO2 Emissions (in grams): 1.9859980179658823
## Validation Metrics
- Loss: 0.06406362354755402
- Accuracy: 0.9833887043189369
- Macro F1: 0.9832763664701248
- Micro F1: 0.9833887043189369
- Weighted F1: 0.9833288528828136
- Macro Precision: 0.9847257743677181
- Micro Precision: 0.9833887043189369
- Weighted Precision: 0.9835392869652073
- Macro Recall: 0.982101705176067
- Micro Recall: 0.9833887043189369
- Weighted Recall: 0.9833887043189369
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-bbc-roberta-37249301
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-bbc-roberta-37249301", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-bbc-roberta-37249301", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
DATEXIS/CORe-clinical-mortality-prediction
|
DATEXIS
| 2021-11-30T13:28:29Z | 29 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"medical",
"clinical",
"mortality",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: "en"
tags:
- bert
- medical
- clinical
- mortality
thumbnail: "https://core.app.datexis.com/static/paper.png"
---
# CORe Model - Clinical Mortality Risk Prediction
## Model description
The CORe (_Clinical Outcome Representations_) model is introduced in the paper [Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75.pdf).
It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.
This model checkpoint is **fine-tuned on the task of mortality risk prediction**.
The model expects patient admission notes as input and outputs the predicted risk of in-hospital mortality.
#### How to use CORe Mortality Risk Prediction
You can load the model via the transformers library:
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-mortality-prediction")
model = AutoModelForSequenceClassification.from_pretrained("bvanaken/CORe-clinical-mortality-prediction")
```
The following code shows an inference example:
```
input = "CHIEF COMPLAINT: Headaches\n\nPRESENT ILLNESS: 58yo man w/ hx of hypertension, AFib on coumadin presented to ED with the worst headache of his life."
tokenized_input = tokenizer(input, return_tensors="pt")
output = model(**tokenized_input)
import torch
predictions = torch.softmax(output.logits.detach(), dim=1)
mortality_risk_prediction = predictions[0][1].item()
```
### More Information
For all the details about CORe and contact info, please visit [CORe.app.datexis.com](http://core.app.datexis.com/).
### Cite
```bibtex
@inproceedings{vanaken21,
author = {Betty van Aken and
Jens-Michalis Papaioannou and
Manuel Mayrdorfer and
Klemens Budde and
Felix A. Gers and
Alexander Löser},
title = {Clinical Outcome Prediction from Admission Notes using Self-Supervised
Knowledge Integration},
booktitle = {Proceedings of the 16th Conference of the European Chapter of the
Association for Computational Linguistics: Main Volume, {EACL} 2021,
Online, April 19 - 23, 2021},
publisher = {Association for Computational Linguistics},
year = {2021},
}
```
|
mimi/wynehills-mimi-ASR
|
mimi
| 2021-11-30T11:45:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
name: wynehills-mimi-ASR
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wynehills-mimi-ASR
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3822
- Wer: 0.6309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 70
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.54 | 20 | 1.4018 | 0.6435 |
| No log | 3.08 | 40 | 1.4704 | 0.6593 |
| No log | 4.62 | 60 | 1.4898 | 0.6625 |
| No log | 6.15 | 80 | 1.4560 | 0.6404 |
| No log | 7.69 | 100 | 1.3822 | 0.6309 |
| No log | 9.23 | 120 | 1.3822 | 0.6309 |
| No log | 10.77 | 140 | 1.3822 | 0.6309 |
| No log | 12.31 | 160 | 1.3822 | 0.6309 |
| No log | 13.85 | 180 | 1.3822 | 0.6309 |
| No log | 15.38 | 200 | 1.3822 | 0.6309 |
| No log | 16.92 | 220 | 1.3822 | 0.6309 |
| No log | 18.46 | 240 | 1.3822 | 0.6309 |
| No log | 20.0 | 260 | 1.3822 | 0.6309 |
| No log | 21.54 | 280 | 1.3822 | 0.6309 |
| No log | 23.08 | 300 | 1.3822 | 0.6309 |
| No log | 24.62 | 320 | 1.3822 | 0.6309 |
| No log | 26.15 | 340 | 1.3822 | 0.6309 |
| No log | 27.69 | 360 | 1.3822 | 0.6309 |
| No log | 29.23 | 380 | 1.3822 | 0.6309 |
| No log | 30.77 | 400 | 1.3822 | 0.6309 |
| No log | 32.31 | 420 | 1.3822 | 0.6309 |
| No log | 33.85 | 440 | 1.3822 | 0.6309 |
| No log | 35.38 | 460 | 1.3822 | 0.6309 |
| No log | 36.92 | 480 | 1.3822 | 0.6309 |
| 0.0918 | 38.46 | 500 | 1.3822 | 0.6309 |
| 0.0918 | 40.0 | 520 | 1.3822 | 0.6309 |
| 0.0918 | 41.54 | 540 | 1.3822 | 0.6309 |
| 0.0918 | 43.08 | 560 | 1.3822 | 0.6309 |
| 0.0918 | 44.62 | 580 | 1.3822 | 0.6309 |
| 0.0918 | 46.15 | 600 | 1.3822 | 0.6309 |
| 0.0918 | 47.69 | 620 | 1.3822 | 0.6309 |
| 0.0918 | 49.23 | 640 | 1.3822 | 0.6309 |
| 0.0918 | 50.77 | 660 | 1.3822 | 0.6309 |
| 0.0918 | 52.31 | 680 | 1.3822 | 0.6309 |
| 0.0918 | 53.85 | 700 | 1.3822 | 0.6309 |
| 0.0918 | 55.38 | 720 | 1.3822 | 0.6309 |
| 0.0918 | 56.92 | 740 | 1.3822 | 0.6309 |
| 0.0918 | 58.46 | 760 | 1.3822 | 0.6309 |
| 0.0918 | 60.0 | 780 | 1.3822 | 0.6309 |
| 0.0918 | 61.54 | 800 | 1.3822 | 0.6309 |
| 0.0918 | 63.08 | 820 | 1.3822 | 0.6309 |
| 0.0918 | 64.62 | 840 | 1.3822 | 0.6309 |
| 0.0918 | 66.15 | 860 | 1.3822 | 0.6309 |
| 0.0918 | 67.69 | 880 | 1.3822 | 0.6309 |
| 0.0918 | 69.23 | 900 | 1.3822 | 0.6309 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ying-tina/wav2vec2-base-timit-demo-colab
|
ying-tina
| 2021-11-30T10:52:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5127
- Wer: 0.3082
## Model description
More information needed
## Intended uses & limitations
More information needed
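A minimal usage sketch, assuming the repository also contains the processor files and that `speech` is a 1-D float array of 16 kHz mono audio (e.g. loaded with `librosa` or `soundfile`):
```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "ying-tina/wav2vec2-base-timit-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# `speech` is assumed to be a 1-D float array of 16 kHz mono audio.
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```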
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7645 | 2.01 | 500 | 2.5179 | 0.9999 |
| 1.1873 | 4.02 | 1000 | 0.5464 | 0.4798 |
| 0.46 | 6.02 | 1500 | 0.4625 | 0.4025 |
| 0.2869 | 8.03 | 2000 | 0.4252 | 0.3650 |
| 0.2213 | 10.04 | 2500 | 0.4340 | 0.3585 |
| 0.1905 | 12.05 | 3000 | 0.4310 | 0.3404 |
| 0.1545 | 14.06 | 3500 | 0.4547 | 0.3381 |
| 0.1206 | 16.06 | 4000 | 0.4902 | 0.3384 |
| 0.1116 | 18.07 | 4500 | 0.4767 | 0.3253 |
| 0.0925 | 20.08 | 5000 | 0.5248 | 0.3160 |
| 0.0897 | 22.09 | 5500 | 0.4960 | 0.3126 |
| 0.0687 | 24.1 | 6000 | 0.4876 | 0.3086 |
| 0.063 | 26.1 | 6500 | 0.4895 | 0.3065 |
| 0.0558 | 28.11 | 7000 | 0.5127 | 0.3082 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mustapha/distilgpt2-finetuned-wikitext2
|
mustapha
| 2021-11-30T09:52:12Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
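A minimal usage sketch (the prompt is made up):
```python
from transformers import pipeline

# Minimal sketch: hypothetical prompt.
generator = pipeline("text-generation", model="mustapha/distilgpt2-finetuned-wikitext2")
print(generator("The history of the Roman Empire", max_length=50, num_return_sequences=1))
```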
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn
|
raynardj
| 2021-11-30T01:06:55Z | 14 | 5 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"search",
"zh",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- zh
tags:
- search
---
# Cross Language Search
## Search classical CN with modern ZH
* In some cases, Classical Chinese feels like another language; I even trained 2 translation models ([1](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient) and [2](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern)) to prove this point.
* That's why, when people want to be savvy about their words, we choose to quote our ancestors. It's exactly like how Westerners like to quote Latin or Shakespeare; the difference is that we have a much bigger pool to choose from.
* This model helps you **find** text within **ancient Chinese** literature, but you can **search with modern Chinese**
# Cross-language search
## Search the ancient with the modern
* I don't remember who wrote it or in which dynasty; I only roughly remember the gist, yet I can still fuzzily locate the original text.
* I don't remember the original text; I only remember the modern-Chinese meaning it conveys and would like to find it so I can quote it.
* I'm writing an article, have a point to make, and want to try my luck to see whether the ancients ever said something similar.
* I simply want to read Classical Chinese more efficiently.
The recommended usage is shown below. Of course, there are many frameworks and engines for cosine-distance search; pick whichever suits you.
Install the packages:
```shell
pip install -Uqq unpackai
pip install -Uqq sentence-transformers
```
The search function:
```python
from unpackai.interp import CosineSearch
from sentence_transformers import SentenceTransformer
import pandas as pd
import numpy as np
TAG = "raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn"
encoder = SentenceTransformer(TAG)
# all_lines is a list of all your sentences
# it can be a single book split into sentences, or many, many books
all_lines = ["句子1","句子2",...]
vec = encoder.encode(all_lines, batch_size=32, show_progress_bar=True)
# cosine-distance searcher
cosine = CosineSearch(vec)
def search(text):
enc = encoder.encode(text) # encode the search key
order = cosine(enc) # distance array
sentence_df = pd.DataFrame({"sentence":np.array(all_lines)[order[:5]]})
return sentence_df
```
After splitting the Records of the Grand Historian (史记) into sentences, the search results look like this:
```python
>>> search("他是一个很慷慨的人")
```
```
sentence
0 季布者,楚人也。为气任侠,有名於楚。
1 董仲舒为人廉直。
2 大将军为人仁善退让,以和柔自媚於上,然天下未有称也。
3 勃为人木彊敦厚,高帝以为可属大事。
4 石奢者,楚昭王相也。坚直廉正,无所阿避。
```
```python
>>> search("进入军营,必须缓缓牵着马骑")
```
```
sentence
0 壁门士吏谓从属车骑曰:将军约,军中不得驱驰。
1 起之为将,与士卒最下者同衣食。卧不设席,行不骑乘,亲裹赢粮,与士卒分劳苦。
2 既出,沛公留车骑,独骑一马,与樊哙等四人步从,从间道山下归走霸上军,而使张良谢项羽。
3 顷之,上行出中渭桥,有一人从穚下走出,乘舆马惊。
4 元狩四年春,上令大将军青、骠骑将军去病将各五万骑,步兵转者踵军数十万,而敢力战深入之士皆属骠骑。
```
## Other resources
* [Project source code 🌟, stars and PRs welcome](https://github.com/raynardj/yuan)
* [Cross-language search 🔎](https://huggingface.co/raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn)
* [Modern Chinese to Classical Chinese translation model ⛰](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient)
* [Classical Chinese to modern Chinese translation model; the input can be unpunctuated 🚀](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern)
* [Punctuation / sentence segmentation model 🗡](https://huggingface.co/raynardj/classical-chinese-punctuation-guwen-biaodian)
* [Keyword-to-poetry and acrostic poetry model 🤖](https://huggingface.co/raynardj/keywords-cangtou-chinese-poetry)
|
ffsouza/tiny-mbart-finetuned-en-to-ro
|
ffsouza
| 2021-11-30T00:39:57Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- wmt16_en_ro_pre_processed
metrics:
- bleu
model-index:
- name: tiny-mbart-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16_en_ro_pre_processed
type: wmt16_en_ro_pre_processed
args: enro
metrics:
- name: Bleu
type: bleu
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mbart-finetuned-en-to-ro
This model is a fine-tuned version of [sshleifer/tiny-mbart](https://huggingface.co/sshleifer/tiny-mbart) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 8.4792
- Bleu: 0.0
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:|
| 8.2425 | 1.0 | 76290 | 8.4792 | 0.0 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
rossanez/opus-mt-finetuned-en-es
|
rossanez
| 2021-11-29T22:50:12Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: opus-mt-finetuned-en-es
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
args: en-es
metrics:
- name: Bleu
type: bleu
value: 21.5636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-finetuned-en-es
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9813
- Bleu: 21.5636
- Gen Len: 30.0992
## Model description
More information needed
## Intended uses & limitations
More information needed
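A minimal usage sketch (the English sentence is made up):
```python
from transformers import pipeline

# Minimal sketch: hypothetical English input sentence.
translator = pipeline("translation", model="rossanez/opus-mt-finetuned-en-es")
print(translator("The old sailor told us stories about the sea.")[0]["translation_text"])
```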
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 2.09 | 1.0 | 4382 | 1.9813 | 21.5636 | 30.0992 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
BigSalmon/MrLincoln10
|
BigSalmon
| 2021-11-29T22:23:11Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln10")
```
```
How To Make Prompt:
Original: freedom of the press is a check against political corruption.
Edited: fundamental to the spirit of democracy, freedom of the press is a check against political corruption.
Edited 2: ever at odds with tyranny, freedom of the press is a check against political corruption.
Edited 3: never to be neglected, freedom of the press is a check against political corruption.
Original: solar is a beacon of achievement.
Edited: central to decoupling from the perils of unsustainable energy, solar is a beacon of achievement.
Edited 2: key to a future beyond fossil fuels, solar is a beacon of achievement.
Original: milan is nevertheless ambivalent towards his costly terms.
Edited: keen on contracting him, milan is nevertheless ambivalent towards his costly terms.
Edited 2: intent on securing his services, milan is nevertheless ambivalent towards his costly terms.
Original:
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
```
|
Narsil/pet-segmentation
|
Narsil
| 2021-11-29T16:23:29Z | 6 | 9 |
generic
|
[
"generic",
"tf",
"image-segmentation",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2022-03-02T23:29:04Z |
---
tags:
- image-segmentation
- generic
library_name: generic
pipeline_tag: image-segmentation
dataset:
- oxford-iiit pets
license: apache-2.0
---
## Keras semantic segmentation models on the 🤗Hub! 🐶 🐕 🐩
Image classification tells us which class is assigned to an image, and object detection puts a bounding box around an object in an image. But what if we want to know about the shapes in an image? Segmentation models help us segment images and reveal their shapes. Segmentation has many variants. You can host your Keras segmentation models on the Hub.
Semantic segmentation models classify pixels, meaning they assign a class (this can be cat or dog) to each pixel. The output of a model looks like the following.

We need to get the best prediction for every pixel.

This is still not readable. We have to convert it into a separate binary mask for each class and then into a readable format by encoding each mask as base64. We return a list of dicts; each dictionary contains the label itself, the base64-encoded mask, and a score (semantic segmentation models don't return a score, so we return 1.0 in this case). You can find the full implementation in ```pipeline.py```.

Now that you know the expected output of the model, you can host your Keras segmentation models (and other semantic segmentation models) in a similar fashion. Try it yourself and host your segmentation models!

|
Jeska/BertjeWDialDataQA20k
|
Jeska
| 2021-11-29T15:35:11Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialDataQA20k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataQA20k
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9208
## Model description
More information needed
## Intended uses & limitations
More information needed
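A minimal usage sketch (the Dutch example sentence is made up):
```python
from transformers import pipeline

# Minimal sketch: hypothetical Dutch sentence with a masked token.
unmasker = pipeline("fill-mask", model="Jeska/BertjeWDialDataQA20k")
print(unmasker("Ik heb morgen een afspraak bij de [MASK]."))
```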
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1713 | 1.0 | 1542 | 2.0098 |
| 2.0736 | 2.0 | 3084 | 1.9853 |
| 2.0543 | 3.0 | 4626 | 2.0134 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
raynardj/wenyanwen-chinese-translate-to-ancient
|
raynardj
| 2021-11-29T14:42:25Z | 136 | 49 |
transformers
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"translation",
"文言文",
"ancient",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- zh
- zh
tags:
- translation
- 文言文
- ancient
license: apache-2.0
widget:
- text: "轻轻的我走了,正如我轻轻的来。我轻轻的招手,作别西天的云彩。"
example_title: "再别康桥"
- text: "当恐惧逝去,我会打开心眼,看清它的轨迹。"
example_title: "沙丘"
- text: "暴力是无能者的最后手段"
example_title: "基地"
---
# From modern Chinese to Ancient Chinese
> This model translates modern Chinese into Classical Chinese, so I guess whoever is interested in this problem can read at least modern Chinese, so... let me continue the documentation with Chinese readers in mind.
* A translator from modern Chinese to Classical Chinese. You are welcome to visit the [GitHub page of the Classical Chinese poetry project "Yuan" (渊) to discuss and add a ⭐️](https://github.com/raynardj/yuan)
* There is also a matching [🤗 Classical-to-modern Chinese model](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern); its input can be either **punctuated** or **unpunctuated**.
* The training corpus is just over 900,000 sentence pairs: [dataset link 📚](https://github.com/BangBOOM/Classical-Chinese).
## Recommended inference pipeline
**Note**: you must set the ```eos_token_id``` of the ```generate``` function to 102 to get complete translated sentences; otherwise leftover text remains after the translation (caused by using the pad label = -100 when computing the loss).
The compute widget on the Hugging Face page currently suffers from this issue, so the following code is recommended to obtain translation results 🎻
```python
import torch

from transformers import (
EncoderDecoderModel,
AutoTokenizer
)
PRETRAINED = "raynardj/wenyanwen-chinese-translate-to-ancient"
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
model = EncoderDecoderModel.from_pretrained(PRETRAINED)
def inference(text):
tk_kwargs = dict(
truncation=True,
max_length=128,
padding="max_length",
return_tensors='pt')
inputs = tokenizer([text,],**tk_kwargs)
with torch.no_grad():
return tokenizer.batch_decode(
model.generate(
inputs.input_ids,
attention_mask=inputs.attention_mask,
num_beams=3,
bos_token_id=101,
eos_token_id=tokenizer.sep_token_id,
pad_token_id=tokenizer.pad_token_id,
), skip_special_tokens=True)
```
## Examples from the current version
> If you come across any fun or tricky cases, feedback is welcome.
```python
>>> inference('你连一百块都不肯给我')
['不 肯 与 我 百 钱 。']
```
```python
>>> inference("他不能做长远的谋划")
['不 能 为 远 谋 。']
```
```python
>>> inference("我们要干一番大事业")
['吾 属 当 举 大 事 。']
```
```python
>>> inference("这感觉,已经不对,我努力,在挽回")
['此 之 谓 也 , 已 不 可 矣 , 我 勉 之 , 以 回 之 。']
```
```python
>>> inference("轻轻地我走了, 正如我轻轻地来, 我挥一挥衣袖,不带走一片云彩")
['轻 我 行 , 如 我 轻 来 , 挥 袂 不 携 一 片 云 。']
```
## Other Classical Chinese poetry resources
* [Project source code 🌟, stars and PRs welcome](https://github.com/raynardj/yuan)
* [Cross-language search 🔎](https://huggingface.co/raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn)
* [Modern Chinese to Classical Chinese translation model ⛰](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient)
* [Classical Chinese to modern Chinese translation model; the input can be unpunctuated 🚀](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern)
* [Punctuation / sentence segmentation model 🗡](https://huggingface.co/raynardj/classical-chinese-punctuation-guwen-biaodian)
* [Keyword-to-poetry and acrostic poetry model 🤖](https://huggingface.co/raynardj/keywords-cangtou-chinese-poetry)
|
BigSalmon/MrLincoln6
|
BigSalmon
| 2021-11-29T14:42:02Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln6")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
```
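As a rough sketch of how this prompt format can be fed to the model loaded above (the decoding settings here are illustrative assumptions, not values recommended by the author):
```python
prompt = """informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln:"""

inputs = tokenizer(prompt, return_tensors="pt")
# sampling parameters below are arbitrary choices for illustration
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.92,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```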
|
google/tapas-mini-masklm
|
google
| 2021-11-29T14:15:38Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tapas",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
This model corresponds to **tapas_masklm_mini_reset** of the [original repository](https://github.com/google-research/tapas).
Here's how you can use it:
```python
from transformers import TapasTokenizer, TapasForMaskedLM
import pandas as pd
import torch
tokenizer = TapasTokenizer.from_pretrained("google/tapas-mini-masklm")
model = TapasForMaskedLM.from_pretrained("google/tapas-mini-masklm")
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
'Age': ["56", "45", "59"],
'Number of movies': ["87", "53", "69"]
}
table = pd.DataFrame.from_dict(data)
query = "How many movies has Leonardo [MASK] Caprio played in?"
# prepare inputs
inputs = tokenizer(table=table, queries=query, padding="max_length", return_tensors="pt")
# forward pass
outputs = model(**inputs)
# return top 5 values and predictions
masked_index = torch.nonzero(inputs.input_ids.squeeze() == tokenizer.mask_token_id, as_tuple=False)
logits = outputs.logits[0, masked_index.item(), :]
probs = logits.softmax(dim=0)
values, predictions = probs.topk(5)
for value, pred in zip(values, predictions):
print(f"{tokenizer.decode([pred])} with confidence {value}")
```
|
xiongjie/realtime-SRGAN-for-anime
|
xiongjie
| 2021-11-29T13:46:51Z | 0 | 2 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
This is a super-resolution model that upscales anime-like illustration images by 4x.
It can upscale a 256x256 image to 1024x1024 in around 20 ms on GPU and around 250 ms on CPU.
An example is available [here](https://github.com/xiong-jie-y/ml-examples/tree/master/realtime_srgan_anime).
All the models in this repository are under the MIT License.
|
google/tapas-medium-finetuned-tabfact
|
google
| 2021-11-29T13:09:54Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tapas",
"text-classification",
"sequence-classification",
"en",
"dataset:tab_fact",
"arxiv:2010.00571",
"arxiv:2004.02349",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
datasets:
- tab_fact
---
# TAPAS medium model fine-tuned on Tabular Fact Checking (TabFact)
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_medium_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_medium`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then
jointly training this randomly initialized classification head with the base model on TabFact.
## Intended uses & limitations
You can use this model for classifying whether a sentence is supported or refuted by the contents of a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
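As a minimal sketch, table fact verification with this checkpoint might look as follows (the toy table, the claim, and the label handling are illustrative assumptions):
```python
from transformers import TapasTokenizer, TapasForSequenceClassification
import pandas as pd
import torch

model_name = "google/tapas-medium-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForSequenceClassification.from_pretrained(model_name)

# toy table and a claim to verify against it
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
        "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
sentence = "George Clooney has played in 69 movies."

inputs = tokenizer(table=table, queries=sentence, padding="max_length", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
# label names (supported/refuted) are taken from the checkpoint's config
print(model.config.id2label[predicted_class])
```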
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup
ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@inproceedings{2019TabFactA,
title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang},
booktitle = {International Conference on Learning Representations (ICLR)},
address = {Addis Ababa, Ethiopia},
month = {April},
year = {2020}
}
```
|
google/tapas-small-finetuned-sqa
|
google
| 2021-11-29T13:09:34Z | 523 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:msr_sqa",
"arxiv:2004.02349",
"arxiv:2010.00571",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- tapas
license: apache-2.0
datasets:
- msr_sqa
---
# TAPAS small model fine-tuned on Sequential Question Answering (SQA)
This model has 2 versions which can be used. The default version corresponds to the `tapas_sqa_inter_masklm_small_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_sqa_inter_masklm_small` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results on SQA - Dev Accuracy
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.7223 | [tapas-large-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/no_reset)
LARGE | reset | 0.7289 | [tapas-large-finetuned-sqa](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/main)
BASE | noreset | 0.6737 | [tapas-base-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/no_reset)
BASE | reset | 0.6874 | [tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/main)
MEDIUM | noreset | 0.6464 | [tapas-medium-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/no_reset)
MEDIUM | reset | 0.6561 | [tapas-medium-finetuned-sqa](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/main)
**SMALL** | **noreset** | **0.5876** | [tapas-small-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/no_reset)
**SMALL** | **reset** | **0.6155** | [tapas-small-finetuned-sqa](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/main)
MINI | noreset | 0.4574 | [tapas-mini-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/no_reset)
MINI | reset | 0.5148 | [tapas-mini-finetuned-sqa](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/main)
TINY | noreset | 0.2004 | [tapas-tiny-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/no_reset)
TINY | reset | 0.2375 | [tapas-tiny-finetuned-sqa](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head on top of the pre-trained model, and then jointly
training this randomly initialized classification head with the base model on SQA.
## Intended uses & limitations
You can use this model for answering questions related to a table in a conversational set-up.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
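As a rough sketch (the toy table and questions are made up for illustration), the `table-question-answering` pipeline can be used with this checkpoint:
```python
from transformers import pipeline
import pandas as pd

tqa = pipeline("table-question-answering", model="google/tapas-small-finetuned-sqa")

# toy table; TAPAS expects all cell values as strings
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
        "Age": ["56", "45", "59"],
        "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)

# SQA is conversational: the second question refers back to the first,
# and sequential=True lets the pipeline condition on earlier turns
questions = ["Who is 45 years old?",
             "How many movies has he played in?"]
answers = tqa(table=table, query=questions, sequential=True)
for qa in answers:
    print(qa["answer"])
```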
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with maximum sequence length 512 and batch size of 128.
In this setup, fine-tuning takes around 20 hours. The optimizer used is Adam with a learning rate of 1.25e-5, and a warmup ratio
of 0.2. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See also table 12 of the [original paper](https://arxiv.org/abs/2004.02349).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@InProceedings{iyyer2017search-based,
author = {Iyyer, Mohit and Yih, Scott Wen-tau and Chang, Ming-Wei},
title = {Search-based Neural Structured Learning for Sequential Question Answering},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
year = {2017},
month = {July},
abstract = {Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.},
publisher = {Association for Computational Linguistics},
url = {https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/},
}
```
|
google/tapas-small-finetuned-tabfact
|
google
| 2021-11-29T13:07:47Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tapas",
"text-classification",
"sequence-classification",
"en",
"dataset:tab_fact",
"arxiv:2010.00571",
"arxiv:2004.02349",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
datasets:
- tab_fact
---
# TAPAS small model fine-tuned on Tabular Fact Checking (TabFact)
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_small_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_small`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then
jointly training this randomly initialized classification head with the base model on TabFact.
## Intended uses & limitations
You can use this model for classifying whether a sentence is supported or refuted by the contents of a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
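As a minimal sketch, several candidate claims can be scored against one table with this checkpoint (the table, claims, and label mapping are illustrative assumptions):
```python
from transformers import TapasTokenizer, TapasForSequenceClassification
import pandas as pd
import torch

model_name = "google/tapas-small-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForSequenceClassification.from_pretrained(model_name)

# toy table; cell values are given as strings
data = {"City": ["Paris", "Berlin", "Madrid"],
        "Population (millions)": ["2.1", "3.6", "3.3"]}
table = pd.DataFrame.from_dict(data)

# several candidate claims checked against the same table
sentences = ["Berlin has the largest population of the three cities.",
             "Paris has a population of over 5 million."]

inputs = tokenizer(table=table, queries=sentences, padding="max_length", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

for sentence, p in zip(sentences, probs):
    # index-to-label mapping comes from the checkpoint's config
    label = model.config.id2label[p.argmax().item()]
    print(f"{sentence} -> {label} ({p.max().item():.2f})")
```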
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup
ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@inproceedings{2019TabFactA,
title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang},
booktitle = {International Conference on Learning Representations (ICLR)},
address = {Addis Ababa, Ethiopia},
month = {April},
year = {2020}
}
```
|
google/tapas-base-finetuned-wikisql-supervised
|
google
| 2021-11-29T13:05:40Z | 502 | 9 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:wikisql",
"arxiv:2004.02349",
"arxiv:2010.00571",
"arxiv:1709.00103",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- tapas
license: apache-2.0
datasets:
- wikisql
---
# TAPAS base model fine-tuned on WikiSQL (in a supervised fashion)
This model has 2 versions which can be used. The default version corresponds to the `tapas_wikisql_sqa_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), and [WikiSQL](https://github.com/salesforce/WikiSQL). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wikisql_sqa_inter_masklm_base` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA and WikiSQL.
## Intended uses & limitations
You can use this model for answering questions related to a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
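As a minimal sketch, table question answering with cell selection and aggregation might look as follows (the toy table, the question, and the aggregation-operator mapping are illustrative assumptions):
```python
from transformers import TapasTokenizer, TapasForQuestionAnswering
import pandas as pd
import torch

model_name = "google/tapas-base-finetuned-wikisql-supervised"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
        "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
queries = ["How many movies has George Clooney played in?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# turn token-level logits back into cell coordinates and an aggregation operator
coords, agg_indices = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits, outputs.logits_aggregation
)
# assumed WikiSQL/WTQ convention for aggregation operator ids
aggregators = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"}
print(aggregators[agg_indices[0]], [table.iat[row, col] for row, col in coords[0]])
```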
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
The authors did first convert the WikiSQL dataset into the format of SQA using automatic conversion scripts.
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 6.17164e-5, and a warmup
ratio of 0.1424. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and 12).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/abs-1709-00103,
author = {Victor Zhong and
Caiming Xiong and
Richard Socher},
title = {Seq2SQL: Generating Structured Queries from Natural Language using
Reinforcement Learning},
journal = {CoRR},
volume = {abs/1709.00103},
year = {2017},
url = {http://arxiv.org/abs/1709.00103},
archivePrefix = {arXiv},
eprint = {1709.00103},
timestamp = {Mon, 13 Aug 2018 16:48:41 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1709-00103.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|