Dataset schema (one row per model repository):

| Column | Dtype | Range / values |
|---|---|---|
| repo_id | string | lengths 4–122 |
| author | string | lengths 2–38 |
| model_type | string | lengths 2–33 |
| files_per_repo | int64 | 2–39k |
| downloads_30d | int64 | 0–33.7M |
| library | string | lengths 2–37 |
| likes | int64 | 0–4.87k |
| pipeline | string | lengths 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2–33 |
| languages | string | lengths 2–1.63k |
| datasets | string | lengths 2–2.58k |
| co2 | string | lengths 6–258 |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–46 |
| prs_closed | int64 | 0–34 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | lengths 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 2 classes |
| has_text | bool | 1 class |
| text_length | int64 | 201–598k |
| readme | string | lengths 0–598k |
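Each record below lists one repository's values in the column order above, with the `readme` cell reproduced verbatim at the end. As a minimal sketch of how such a dump can be queried, assuming the rows are published as a Hugging Face dataset (the dataset id below is a placeholder, not a confirmed source):

```python
# Sketch only: load a models-metadata dump with the columns described above.
# "user/model-repos-stats" is a placeholder dataset id.
from datasets import load_dataset

ds = load_dataset("user/model-repos-stats", split="train")

# Keep repos that actually ship a README, then rank by 30-day downloads.
with_readme = ds.filter(lambda row: row["has_text"] and row["text_length"] > 0)
top = sorted(with_readme, key=lambda row: row["downloads_30d"], reverse=True)[:5]
for row in top:
    print(row["repo_id"], row["pipeline"], row["downloads_30d"], row["likes"])
```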
DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4
DrishtiSharma
wav2vec2
13
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['sr']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'sr']
true
true
true
3,216
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-sr-v4 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SR dataset. It achieves the following results on the evaluation set: - Loss: 0.5570 - Wer: 0.3038 ### Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset mozilla-foundation/common_voice_8_0 --config sr --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset speech-recognition-community-v2/dev_data --config sr --split validation --chunk_length_s 10 --stride_length_s 1 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 8.2934 | 7.5 | 300 | 2.9777 | 0.9995 | | 1.5049 | 15.0 | 600 | 0.5036 | 0.4806 | | 0.3263 | 22.5 | 900 | 0.5822 | 0.4055 | | 0.2008 | 30.0 | 1200 | 0.5609 | 0.4032 | | 0.1543 | 37.5 | 1500 | 0.5203 | 0.3710 | | 0.1158 | 45.0 | 1800 | 0.6458 | 0.3985 | | 0.0997 | 52.5 | 2100 | 0.6227 | 0.4013 | | 0.0834 | 60.0 | 2400 | 0.6048 | 0.3836 | | 0.0665 | 67.5 | 2700 | 0.6197 | 0.3686 | | 0.0602 | 75.0 | 3000 | 0.5418 | 0.3453 | | 0.0524 | 82.5 | 3300 | 0.5310 | 0.3486 | | 0.0445 | 90.0 | 3600 | 0.5599 | 0.3374 | | 0.0406 | 97.5 | 3900 | 0.5958 | 0.3327 | | 0.0358 | 105.0 | 4200 | 0.6017 | 0.3262 | | 0.0302 | 112.5 | 4500 | 0.5613 | 0.3248 | | 0.0285 | 120.0 | 4800 | 0.5659 | 0.3462 | | 0.0213 | 127.5 | 5100 | 0.5568 | 0.3206 | | 0.0215 | 135.0 | 5400 | 0.6524 | 0.3472 | | 0.0162 | 142.5 | 5700 | 0.6223 | 0.3458 | | 0.0137 | 150.0 | 6000 | 0.6625 | 0.3313 | | 0.0114 | 157.5 | 6300 | 0.5739 | 0.3336 | | 0.0101 | 165.0 | 6600 | 0.5906 | 0.3285 | | 0.008 | 172.5 | 6900 | 0.5982 | 0.3112 | | 0.0076 | 180.0 | 7200 | 0.5399 | 0.3094 | | 0.0071 | 187.5 | 7500 | 0.5387 | 0.2991 | | 0.0057 | 195.0 | 7800 | 0.5570 | 0.3038 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
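The card above documents evaluation commands and training details but no inference snippet; a minimal sketch (an assumption, not part of the card) using the `transformers` ASR pipeline would be:

```python
# Inference sketch (assumed, not from the card): transcribe a local audio file.
# "sample.wav" is a placeholder path; 16 kHz mono audio matches the XLS-R setup.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4",
)
print(asr("sample.wav")["text"])
```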
DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2
DrishtiSharma
wav2vec2
15
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['vot']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'vot', 'robust-speech-event', 'hf-asr-leaderboard']
true
true
true
1,927
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-vot-final-a2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - VOT dataset. It achieves the following results on the evaluation set: - Loss: 2.8745 - Wer: 0.8333 ### Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2 --dataset mozilla-foundation/common_voice_8_0 --config vot --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data Votic language isn't available in speech-recognition-community-v2/dev_data ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 340 - num_epochs: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 11.1216 | 33.33 | 100 | 4.2848 | 1.0 | | 2.9982 | 66.67 | 200 | 2.8665 | 1.0 | | 1.5476 | 100.0 | 300 | 2.3022 | 0.8889 | | 0.2776 | 133.33 | 400 | 2.7480 | 0.8889 | | 0.1136 | 166.67 | 500 | 2.5383 | 0.8889 | | 0.0489 | 200.0 | 600 | 2.8745 | 0.8333 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
DrishtiSharma/wav2vec2-xls-r-300m-kk-n2
DrishtiSharma
wav2vec2
18
10
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['kk']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'kk', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
true
true
true
2,488
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - KK dataset. It achieves the following results on the evaluation set: - Loss: 0.7149 - Wer: 0.451 ### Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 --dataset mozilla-foundation/common_voice_8_0 --config kk --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data Kazakh language not found in speech-recognition-community-v2/dev_data! ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000222 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 150.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 9.6799 | 9.09 | 200 | 3.6119 | 1.0 | | 3.1332 | 18.18 | 400 | 2.5352 | 1.005 | | 1.0465 | 27.27 | 600 | 0.6169 | 0.682 | | 0.3452 | 36.36 | 800 | 0.6572 | 0.607 | | 0.2575 | 45.44 | 1000 | 0.6527 | 0.578 | | 0.2088 | 54.53 | 1200 | 0.6828 | 0.551 | | 0.158 | 63.62 | 1400 | 0.7074 | 0.5575 | | 0.1309 | 72.71 | 1600 | 0.6523 | 0.5595 | | 0.1074 | 81.8 | 1800 | 0.7262 | 0.5415 | | 0.087 | 90.89 | 2000 | 0.7199 | 0.521 | | 0.0711 | 99.98 | 2200 | 0.7113 | 0.523 | | 0.0601 | 109.09 | 2400 | 0.6863 | 0.496 | | 0.0451 | 118.18 | 2600 | 0.6998 | 0.483 | | 0.0378 | 127.27 | 2800 | 0.6971 | 0.4615 | | 0.0319 | 136.36 | 3000 | 0.7119 | 0.4475 | | 0.0305 | 145.44 | 3200 | 0.7181 | 0.459 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
DrishtiSharma/wav2vec2-xls-r-300m-mt-o1
DrishtiSharma
wav2vec2
18
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['mt']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'mt', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
true
true
true
1,755
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset. It achieves the following results on the evaluation set: - Loss: 0.1987 - Wer: 0.1920 ### Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-mt-o1 --dataset mozilla-foundation/common_voice_8_0 --config mt --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data Maltese language not found in speech-recognition-community-v2/dev_data! ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.1721 | 18.02 | 2000 | 0.3831 | 0.4066 | | 0.7849 | 36.04 | 4000 | 0.2191 | 0.2417 | | 0.6723 | 54.05 | 6000 | 0.2056 | 0.2134 | | 0.6015 | 72.07 | 8000 | 0.2008 | 0.2031 | | 0.5386 | 90.09 | 10000 | 0.1967 | 0.1953 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5
DrishtiSharma
wav2vec2
18
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['pa-IN']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'pa-IN', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
true
true
true
2,141
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset. It achieves the following results on the evaluation set: - Loss: 0.8881 - Wer: 0.4175 ### Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 --dataset mozilla-foundation/common_voice_8_0 --config pa-IN --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data Punjabi language isn't available in speech-recognition-community-v2/dev_data ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000111 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 200.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 10.695 | 18.52 | 500 | 3.5681 | 1.0 | | 3.2718 | 37.04 | 1000 | 2.3081 | 0.9643 | | 0.8727 | 55.56 | 1500 | 0.7227 | 0.5147 | | 0.3349 | 74.07 | 2000 | 0.7498 | 0.4959 | | 0.2134 | 92.59 | 2500 | 0.7779 | 0.4720 | | 0.1445 | 111.11 | 3000 | 0.8120 | 0.4594 | | 0.1057 | 129.63 | 3500 | 0.8225 | 0.4610 | | 0.0826 | 148.15 | 4000 | 0.8307 | 0.4351 | | 0.0639 | 166.67 | 4500 | 0.8967 | 0.4316 | | 0.0528 | 185.19 | 5000 | 0.8875 | 0.4238 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11
DrishtiSharma
wav2vec2
18
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['rm-sursilv']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'hf-asr-leaderboard', 'robust-speech-event']
true
true
true
1,928
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - RM-SURSILV dataset. It achieves the following results on the evaluation set: - Loss: 0.2511 - Wer: 0.2415 #### Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11 --dataset mozilla-foundation/common_voice_8_0 --config rm-sursilv --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data Romansh-Sursilv language isn't available in speech-recognition-community-v2/dev_data ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 125.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 2.3958 | 17.44 | 1500 | 0.6808 | 0.6521 | | 0.9663 | 34.88 | 3000 | 0.3023 | 0.3718 | | 0.7963 | 52.33 | 4500 | 0.2588 | 0.3046 | | 0.6893 | 69.77 | 6000 | 0.2436 | 0.2718 | | 0.6148 | 87.21 | 7500 | 0.2521 | 0.2572 | | 0.5556 | 104.65 | 9000 | 0.2490 | 0.2442 | | 0.5258 | 122.09 | 10500 | 0.2515 | 0.2442 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1
DrishtiSharma
wav2vec2
18
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['rm-vallader']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'rm-vallader', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
true
true
true
1,845
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - RM-VALLADER dataset. It achieves the following results on the evaluation set: - Loss: 0.2754 - Wer: 0.2831 ### Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1 --dataset mozilla-foundation/common_voice_8_0 --config rm-vallader --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data Romansh-Vallader language not found in speech-recognition-community-v2/dev_data ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.927 | 15.15 | 500 | 2.9196 | 1.0 | | 1.3835 | 30.3 | 1000 | 0.5879 | 0.5866 | | 0.7415 | 45.45 | 1500 | 0.3077 | 0.3316 | | 0.5575 | 60.61 | 2000 | 0.2735 | 0.2954 | | 0.4581 | 75.76 | 2500 | 0.2707 | 0.2802 | | 0.3977 | 90.91 | 3000 | 0.2785 | 0.2809 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
DrishtiSharma/wav2vec2-xls-r-myv-a1
DrishtiSharma
wav2vec2
18
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['myv']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'myv', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
true
true
true
2,893
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MYV dataset. It achieves the following results on the evaluation set: - Loss: 1.0356 - Wer: 0.6524 ### Evaluation Commands **1. To evaluate on mozilla-foundation/common_voice_8_0 with test split** python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-myv-a1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs **2. To evaluate on speech-recognition-community-v2/dev_data** Erzya language not found in speech-recognition-community-v2/dev_data ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 200.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 5.649 | 9.62 | 500 | 3.0038 | 1.0 | | 1.6272 | 19.23 | 1000 | 0.7362 | 0.7819 | | 1.1354 | 28.85 | 1500 | 0.6410 | 0.7111 | | 1.0424 | 38.46 | 2000 | 0.6907 | 0.7431 | | 0.9293 | 48.08 | 2500 | 0.7249 | 0.7102 | | 0.8246 | 57.69 | 3000 | 0.7422 | 0.6966 | | 0.7837 | 67.31 | 3500 | 0.7413 | 0.6813 | | 0.7147 | 76.92 | 4000 | 0.7873 | 0.6930 | | 0.6276 | 86.54 | 4500 | 0.8038 | 0.6677 | | 0.6041 | 96.15 | 5000 | 0.8240 | 0.6831 | | 0.5336 | 105.77 | 5500 | 0.8748 | 0.6749 | | 0.4705 | 115.38 | 6000 | 0.9006 | 0.6497 | | 0.43 | 125.0 | 6500 | 0.8954 | 0.6551 | | 0.3859 | 134.62 | 7000 | 0.9074 | 0.6614 | | 0.3342 | 144.23 | 7500 | 0.9693 | 0.6560 | | 0.3155 | 153.85 | 8000 | 1.0073 | 0.6691 | | 0.2673 | 163.46 | 8500 | 1.0170 | 0.6632 | | 0.2409 | 173.08 | 9000 | 1.0304 | 0.6709 | | 0.2189 | 182.69 | 9500 | 0.9965 | 0.6546 | | 0.1973 | 192.31 | 10000 | 1.0360 | 0.6551 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0 ### Evaluation Command !python eval.py \ --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 \ --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs
DrishtiSharma/wav2vec2-xls-r-pa-IN-a1
DrishtiSharma
wav2vec2
18
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['pa-IN']
['common_voice']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer']
true
true
true
1,855
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset. It achieves the following results on the evaluation set: - Loss: 1.1508 - Wer: 0.4908 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5841 | 9.26 | 500 | 3.2514 | 0.9941 | | 0.3992 | 18.52 | 1000 | 0.8790 | 0.6107 | | 0.2409 | 27.78 | 1500 | 1.0012 | 0.6366 | | 0.1447 | 37.04 | 2000 | 1.0167 | 0.6276 | | 0.1109 | 46.3 | 2500 | 1.0638 | 0.5653 | | 0.0797 | 55.56 | 3000 | 1.1447 | 0.5715 | | 0.0636 | 64.81 | 3500 | 1.1503 | 0.5316 | | 0.0466 | 74.07 | 4000 | 1.2227 | 0.5386 | | 0.0372 | 83.33 | 4500 | 1.1214 | 0.5225 | | 0.0239 | 92.59 | 5000 | 1.1375 | 0.4998 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
DrishtiSharma/wav2vec2-xls-r-sl-a1
DrishtiSharma
wav2vec2
18
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['sl']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'sl']
true
true
true
2,516
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset. It achieves the following results on the evaluation set: - Loss: 0.2756 - Wer: 0.2279 ### Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.3881 | 6.1 | 500 | 2.9710 | 1.0 | | 2.6401 | 12.2 | 1000 | 1.7677 | 0.9734 | | 1.5152 | 18.29 | 1500 | 0.5564 | 0.6011 | | 1.2191 | 24.39 | 2000 | 0.4319 | 0.4390 | | 1.0237 | 30.49 | 2500 | 0.3141 | 0.3175 | | 0.8892 | 36.59 | 3000 | 0.2748 | 0.2689 | | 0.8296 | 42.68 | 3500 | 0.2680 | 0.2534 | | 0.7602 | 48.78 | 4000 | 0.2820 | 0.2506 | | 0.7186 | 54.88 | 4500 | 0.2672 | 0.2398 | | 0.6887 | 60.98 | 5000 | 0.2729 | 0.2402 | | 0.6507 | 67.07 | 5500 | 0.2767 | 0.2361 | | 0.6226 | 73.17 | 6000 | 0.2817 | 0.2332 | | 0.6024 | 79.27 | 6500 | 0.2679 | 0.2279 | | 0.5787 | 85.37 | 7000 | 0.2837 | 0.2316 | | 0.5744 | 91.46 | 7500 | 0.2838 | 0.2284 | | 0.5556 | 97.56 | 8000 | 0.2763 | 0.2281 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
DrishtiSharma/wav2vec2-xls-r-sl-a2
DrishtiSharma
wav2vec2
18
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['sl']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'sl', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
true
true
true
2,399
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset. It achieves the following results on the evaluation set: - Loss: 0.2855 - Wer: 0.2401 ##Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a2 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data Votic language not found in speech-recognition-community-v2/dev_data ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.9294 | 6.1 | 500 | 2.9712 | 1.0 | | 2.8305 | 12.2 | 1000 | 1.7073 | 0.9479 | | 1.4795 | 18.29 | 1500 | 0.5756 | 0.6397 | | 1.3433 | 24.39 | 2000 | 0.4968 | 0.5424 | | 1.1766 | 30.49 | 2500 | 0.4185 | 0.4743 | | 1.0017 | 36.59 | 3000 | 0.3303 | 0.3578 | | 0.9358 | 42.68 | 3500 | 0.3003 | 0.3051 | | 0.8358 | 48.78 | 4000 | 0.3045 | 0.2884 | | 0.7647 | 54.88 | 4500 | 0.2866 | 0.2677 | | 0.7482 | 60.98 | 5000 | 0.2829 | 0.2585 | | 0.6943 | 67.07 | 5500 | 0.2782 | 0.2478 | | 0.6586 | 73.17 | 6000 | 0.2911 | 0.2537 | | 0.6425 | 79.27 | 6500 | 0.2817 | 0.2462 | | 0.6067 | 85.37 | 7000 | 0.2910 | 0.2436 | | 0.5974 | 91.46 | 7500 | 0.2875 | 0.2430 | | 0.5812 | 97.56 | 8000 | 0.2852 | 0.2396 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
Duc/distilbert-base-uncased-finetuned-ner
Duc
distilbert
13
9
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,555
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0604 - Precision: 0.9262 - Recall: 0.9375 - F1: 0.9318 - Accuracy: 0.9841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2424 | 1.0 | 878 | 0.0684 | 0.9096 | 0.9206 | 0.9150 | 0.9813 | | 0.0524 | 2.0 | 1756 | 0.0607 | 0.9188 | 0.9349 | 0.9268 | 0.9835 | | 0.0304 | 3.0 | 2634 | 0.0604 | 0.9262 | 0.9375 | 0.9318 | 0.9841 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
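The card above reports NER metrics but no usage example; a minimal sketch (an assumption, not part of the card) with the token-classification pipeline:

```python
# Usage sketch (assumed): grouped-entity NER with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Duc/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```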
EColi/sponsorblock-base-v1
EColi
t5
15
1
transformers
1
text2text-generation
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
4,344
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # out This model is a fine-tuned version of [/1TB_SSD/SB_AI/out_epoch1/out/checkpoint-1115000/](https://huggingface.co//1TB_SSD/SB_AI/out_epoch1/out/checkpoint-1115000/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0645 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 2518227880 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | 0.0867 | 0.07 | 75000 | 0.0742 | | 0.0783 | 0.13 | 150000 | 0.0695 | | 0.0719 | 0.2 | 225000 | 0.0732 | | 0.0743 | 0.27 | 300000 | 0.0663 | | 0.0659 | 0.34 | 375000 | 0.0686 | | 0.0664 | 0.4 | 450000 | 0.0683 | | 0.0637 | 0.47 | 525000 | 0.0680 | | 0.0655 | 0.54 | 600000 | 0.0641 | | 0.0676 | 0.6 | 675000 | 0.0644 | | 0.0704 | 0.67 | 750000 | 0.0645 | | 0.0687 | 0.74 | 825000 | 0.0610 | | 0.059 | 0.81 | 900000 | 0.0652 | | 0.0666 | 0.87 | 975000 | 0.0619 | | 0.0624 | 0.94 | 1050000 | 0.0619 | | 0.0625 | 1.01 | 1125000 | 0.0667 | | 0.0614 | 1.03 | 1150000 | 0.0658 | | 0.0597 | 1.05 | 1175000 | 0.0683 | | 0.0629 | 1.07 | 1200000 | 0.0691 | | 0.0603 | 1.1 | 1225000 | 0.0678 | | 0.0601 | 1.12 | 1250000 | 0.0746 | | 0.0606 | 1.14 | 1275000 | 0.0691 | | 0.0671 | 1.16 | 1300000 | 0.0702 | | 0.0625 | 1.19 | 1325000 | 0.0661 | | 0.0617 | 1.21 | 1350000 | 0.0688 | | 0.0579 | 1.23 | 1375000 | 0.0679 | | 0.0663 | 1.25 | 1400000 | 0.0634 | | 0.0583 | 1.28 | 1425000 | 0.0638 | | 0.0623 | 1.3 | 1450000 | 0.0681 | | 0.0615 | 1.32 | 1475000 | 0.0670 | | 0.0592 | 1.34 | 1500000 | 0.0666 | | 0.0626 | 1.37 | 1525000 | 0.0666 | | 0.063 | 1.39 | 1550000 | 0.0647 | | 0.0648 | 1.41 | 1575000 | 0.0653 | | 0.0611 | 1.43 | 1600000 | 0.0700 | | 0.0622 | 1.46 | 1625000 | 0.0634 | | 0.0617 | 1.48 | 1650000 | 0.0651 | | 0.0613 | 1.5 | 1675000 | 0.0634 | | 0.0639 | 1.52 | 1700000 | 0.0661 | | 0.0615 | 1.54 | 1725000 | 0.0644 | | 0.0605 | 1.57 | 1750000 | 0.0662 | | 0.0622 | 1.59 | 1775000 | 0.0656 | | 0.0585 | 1.61 | 1800000 | 0.0633 | | 0.0628 | 1.63 | 1825000 | 0.0625 | | 0.0638 | 1.66 | 1850000 | 0.0662 | | 0.0599 | 1.68 | 1875000 | 0.0664 | | 0.0583 | 1.7 | 1900000 | 0.0668 | | 0.0543 | 1.72 | 1925000 | 0.0631 | | 0.06 | 1.75 | 1950000 | 0.0629 | | 0.0615 | 1.77 | 1975000 | 0.0644 | | 0.0587 | 1.79 | 2000000 | 0.0663 | | 0.0647 | 1.81 | 2025000 | 0.0654 | | 0.0604 | 1.84 | 2050000 | 0.0639 | | 0.0641 | 1.86 | 2075000 | 0.0636 | | 0.0604 | 1.88 | 2100000 | 0.0636 | | 0.0654 | 1.9 | 2125000 | 0.0652 | | 0.0588 | 1.93 | 2150000 | 0.0638 | | 0.0616 | 1.95 | 2175000 | 0.0657 | | 0.0598 | 1.97 | 2200000 | 0.0646 | | 0.0633 | 1.99 | 2225000 | 0.0645 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
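The card above is auto-generated and documents only the training run, not the expected input format; the loading-only sketch below is an assumption based on the repo's `text2text-generation` pipeline tag.

```python
# Loading sketch only (assumed): the card does not document the input format
# this checkpoint expects, so no example prompt is shown here.
from transformers import pipeline

generator = pipeline("text2text-generation", model="EColi/sponsorblock-base-v1")
```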
ELiRF/NASCA
ELiRF
bart
10
6
transformers
0
summarization
true
false
false
null
['ca']
null
null
0
0
0
0
0
0
0
['summarization']
false
true
true
3,909
**IMPORTANT:** On the 5th of April 2022, we detected a mistake in the configuration file; thus, the model was not generating the summaries correctly, and it was underperforming in all scenarios. For this reason, if you had used the model until that day, we would be glad if you would re-evaluate the model if you are publishing some results with it. We apologize for the inconvenience and thank you for your understanding. # NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not allow to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out an exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish. # The NASca model News Abstractive Summarization for Catalan (NASca) is a Transformer encoder-decoder model, with the same hyper-parameters than BART, to perform summarization of Catalan news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Catalan newspapers, the Catalan subset of the OSCAR corpus and Wikipedia articles in Catalan were used for pre-training the model (9.3GB of raw text -2.5 millions of documents-). NASca is finetuned for the summarization task on 636.596 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA). ### BibTeX entry ```bibtex @Article{app11219872, AUTHOR = {Ahuir, Vicent and Hurtado, Lluís-F. 
and González, José Ángel and Segarra, Encarna}, TITLE = {NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish}, JOURNAL = {Applied Sciences}, VOLUME = {11}, YEAR = {2021}, NUMBER = {21}, ARTICLE-NUMBER = {9872}, URL = {https://www.mdpi.com/2076-3417/11/21/9872}, ISSN = {2076-3417}, DOI = {10.3390/app11219872} } ```
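Since the NASca card above gives no usage snippet, a minimal summarization sketch (assumed from the repo's `summarization` pipeline tag, not taken from the card) could look like this:

```python
# Usage sketch (assumed): summarize a Catalan news article with NASca.
from transformers import pipeline

summarizer = pipeline("summarization", model="ELiRF/NASCA")
article = "..."  # placeholder: a Catalan newspaper article
print(summarizer(article, max_length=96, min_length=16)[0]["summary_text"])
```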
ELiRF/NASES
ELiRF
bart
10
20
transformers
1
summarization
true
false
false
null
['es']
null
null
0
0
0
0
0
0
0
['summarization']
false
true
true
3,871
**IMPORTANT:** On the 5th of April 2022, we detected a mistake in the configuration file; thus, the model was not generating the summaries correctly, and it was underperforming in all scenarios. For this reason, if you had used the model until that day, we would be glad if you would re-evaluate the model if you are publishing some results with it. We apologize for the inconvenience and thank you for your understanding. # NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not allow to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out an exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish. # The NASes model News Abstractive Summarization for Spanish (NASes) is a Transformer encoder-decoder model, with the same hyper-parameters than BART, to perform summarization of Spanish news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Spanish newspapers, and Wikipedia articles in Spanish were used for pre-training the model (21GB of raw text -8.5 millions of documents-). NASes is finetuned for the summarization task on 1.802.919 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA). ### BibTeX entry ```bibtex @Article{app11219872, AUTHOR = {Ahuir, Vicent and Hurtado, Lluís-F. 
and González, José Ángel and Segarra, Encarna}, TITLE = {NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish}, JOURNAL = {Applied Sciences}, VOLUME = {11}, YEAR = {2021}, NUMBER = {21}, ARTICLE-NUMBER = {9872}, URL = {https://www.mdpi.com/2076-3417/11/21/9872}, ISSN = {2076-3417}, DOI = {10.3390/app11219872} } ```
EMBEDDIA/crosloengual-bert
EMBEDDIA
bert
7
745
transformers
3
fill-mask
true
false
true
cc-by-4.0
['hr', 'sl', 'en', 'multilingual']
null
null
0
0
0
0
0
0
0
[]
false
true
true
962
# CroSloEngual BERT CroSloEngual BERT is a trilingual model, using bert-base architecture, trained on Croatian, Slovenian, and English corpora. Focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't. Evaluation is presented in our article: ``` @Inproceedings{ulcar-robnik2020finest, author = "Ulčar, M. and Robnik-Šikonja, M.", year = 2020, title = "{FinEst BERT} and {CroSloEngual BERT}: less is more in multilingual models", editor = "Sojka, P and Kopeček, I and Pala, K and Horák, A", booktitle = "Text, Speech, and Dialogue {TSD 2020}", series = "Lecture Notes in Computer Science", volume = 12284, publisher = "Springer", url = "https://doi.org/10.1007/978-3-030-58323-1_11", } ``` The preprint is available at [arxiv.org/abs/2006.07890](https://arxiv.org/abs/2006.07890).
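The CroSloEngual BERT card above has no usage example; a quick fill-mask check is sketched below, under the assumption that the standard `[MASK]` token applies, as for other bert-base checkpoints.

```python
# Fill-mask sketch (assumed): the checkpoint is a bert-base model, so "[MASK]" is used.
from transformers import pipeline

fill = pipeline("fill-mask", model="EMBEDDIA/crosloengual-bert")
for pred in fill("Ljubljana je glavno mesto [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```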
EMBEDDIA/est-roberta
EMBEDDIA
camembert
9
174
transformers
2
fill-mask
true
false
false
cc-by-sa-4.0
['et']
null
null
0
0
0
0
0
0
0
[]
false
true
true
579
# Usage Load in transformers library with: ``` from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/est-roberta") model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/est-roberta") ``` # Est-RoBERTa Est-RoBERTa model is a monolingual Estonian BERT-like model. It is closely related to French Camembert model https://camembert-model.fr/. The Estonian corpora used for training the model have 2.51 billion tokens in total. The subword vocabulary contains 40,000 tokens. Est-RoBERTa was trained for 40 epochs.
EMBEDDIA/finest-bert
EMBEDDIA
bert
7
463
transformers
2
fill-mask
true
false
true
cc-by-4.0
['fi', 'et', 'en', 'multilingual']
null
null
0
0
0
0
0
0
0
[]
false
true
true
948
# FinEst BERT FinEst BERT is a trilingual model, using bert-base architecture, trained on Finnish, Estonian, and English corpora. Focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't. Evaluation is presented in our article: ``` @Inproceedings{ulcar-robnik2020finest, author = "Ulčar, M. and Robnik-Šikonja, M.", year = 2020, title = "{FinEst BERT} and {CroSloEngual BERT}: less is more in multilingual models", editor = "Sojka, P and Kopeček, I and Pala, K and Horák, A", booktitle = "Text, Speech, and Dialogue {TSD 2020}", series = "Lecture Notes in Computer Science", volume = 12284, publisher = "Springer", url = "https://doi.org/10.1007/978-3-030-58323-1_11", } ``` The preprint is available at [arxiv.org/abs/2006.07890](https://arxiv.org/abs/2006.07890).
EMBEDDIA/litlat-bert
EMBEDDIA
xlm-roberta
9
32
transformers
3
fill-mask
true
false
false
cc-by-sa-4.0
['lt', 'lv', 'en', 'multilingual']
null
null
0
0
0
0
0
0
0
[]
false
true
true
887
# LitLat BERT LitLat BERT is a trilingual model, using xlm-roberta-base architecture, trained on Lithuanian, Latvian, and English corpora. Focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't. ### Named entity recognition evaluation We compare LitLat BERT with multilingual BERT (mBERT), XLM-RoBERTa (XLM-R) and monolingual Latvian BERT (LVBERT) (Znotins and Barzdins, 2020). We report the results as a macro F1 score of 3 named entity classes shared in all three datasets: person, location, organization. Language | mBERT | XLM-R | LVBERT | LitLat ---|---|---|---|--- Latvian | 0.830 | 0.865 | 0.797 | **0.881** Lithuanian | 0.797 | 0.817 | / | **0.850** English | 0.939 | 0.937 | / | **0.943**
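The est-roberta card above and the sloberta card below include a loading snippet; the LitLat BERT card does not, but the same pattern should apply (a sketch, assuming the standard Auto classes resolve this xlm-roberta-style checkpoint):

```python
# Loading sketch (assumed), mirroring the other EMBEDDIA cards in this dump.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/litlat-bert")
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/litlat-bert")
```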
EMBEDDIA/sloberta
EMBEDDIA
camembert
9
683
transformers
3
fill-mask
true
false
false
cc-by-sa-4.0
['sl']
null
null
0
0
0
0
0
0
0
[]
false
true
true
943
# Usage Load in transformers library with: ``` from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/sloberta") model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/sloberta") ``` # SloBERTa SloBERTa model is a monolingual Slovene BERT-like model. It is closely related to French Camembert model https://camembert-model.fr/. The corpora used for training the model have 3.47 billion tokens in total. The subword vocabulary contains 32,000 tokens. The scripts and programs used for data preparation and training the model are available on https://github.com/clarinsi/Slovene-BERT-Tool SloBERTa was trained for 200,000 iterations or about 98 epochs. ## Corpora The following corpora were used for training the model: * Gigafida 2.0 * Kas 1.0 * Janes 1.0 (only Janes-news, Janes-forum, Janes-blog, Janes-wiki subcorpora) * Slovenian parliamentary corpus siParl 2.0 * slWaC
EMBO/bio-lm
EMBO
roberta
7
6
transformers
0
fill-mask
true
false
true
null
['english']
['EMBO/biolang']
null
1
1
0
0
0
0
0
['language model']
false
true
true
2,147
# bio-lm ## Model description This model is a [RoBERTa base pre-trained model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). ## Intended uses & limitations #### How to use The intended use of this model is to be fine-tuned for downstream tasks, token classification in particular. To have a quick check of the model as-is in a fill-mask task: ```python from transformers import pipeline, RobertaTokenizerFast tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512) text = "Let us try this model to see if it <mask>." fill_mask = pipeline( "fill-mask", model='EMBO/bio-lm', tokenizer=tokenizer ) fill_mask(text) ``` #### Limitations and bias This model should be fine-tuned on a specific task like token classification. The model must be used with the `roberta-base` tokenizer. ## Training data The model was trained with a masked language modeling task on the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang) which includes 12 million examples from abstracts and figure legends extracted from papers published in the life sciences. ## Training procedure The training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs. Training code is available at https://github.com/source-data/soda-roberta - Command: `python -m lm.train /data/json/oapmc_abstracts_figs/ MLM` - Tokenizer vocab size: 50265 - Training data: EMBO/biolang MLM - Training with: 12005390 examples - Evaluating on: 36713 examples - Epochs: 3.0 - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - tensorboard run: lm-MLM-2021-01-27T15-17-43.113766 End of training: ``` trainset: 'loss': 0.8653350830078125 validation set: 'eval_loss': 0.8192330598831177, 'eval_recall': 0.8154601116513597 ``` ## Eval results Eval on test set: ``` recall: 0.814471959728645 ```
EMBO/sd-ner
EMBO
roberta
11
12
transformers
0
token-classification
true
false
true
agpl-3.0
['english']
['EMBO/sd-nlp']
null
1
1
0
0
0
0
0
['token classification']
false
true
true
3,799
# sd-ner ## Model description This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `NER` configuration to perform Named Entity Recognition of bioentities. ## Intended uses & limitations #### How to use The intended use of this model is for Named Entity Recognition of biological entities used in SourceData annotations (https://sourcedata.embo.org), including small molecules, gene products (genes and proteins), subcellular components, cell line and cell types, organ and tissues, species as well as experimental methods. To have a quick check of the model: ```python from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification example = """<s> F. Western blot of input and eluates of Upf1 domains purification in a Nmd4-HA strain. The band with the # might corresponds to a dimer of Upf1-CH, bands marked with a star correspond to residual signal with the anti-HA antibodies (Nmd4). Fragments in the eluate have a smaller size because the protein A part of the tag was removed by digestion with the TEV protease. G6PDH served as a loading control in the input samples </s>""" tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512) model = RobertaForTokenClassification.from_pretrained('EMBO/sd-ner') ner = pipeline('ner', model, tokenizer=tokenizer) res = ner(example) for r in res: print(r['word'], r['entity']) ``` #### Limitations and bias The model must be used with the `roberta-base` tokenizer. ## Training data The model was trained for token classification using the [EMBO/sd-nlp dataset](https://huggingface.co/datasets/EMBO/sd-nlp) dataset which includes manually annotated examples. ## Training procedure The training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs. Training code is available at https://github.com/source-data/soda-roberta - Model fine-tuned: EMBO/bio-lm - Tokenizer vocab size: 50265 - Training data: EMBO/sd-nlp - Dataset configuration: NER - Training with 48771 examples. - Evaluating on 13801 examples. - Training on 15 features: O, I-SMALL_MOLECULE, B-SMALL_MOLECULE, I-GENEPROD, B-GENEPROD, I-SUBCELLULAR, B-SUBCELLULAR, I-CELL, B-CELL, I-TISSUE, B-TISSUE, I-ORGANISM, B-ORGANISM, I-EXP_ASSAY, B-EXP_ASSAY - Epochs: 0.6 - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 0.0001 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 ## Eval results Testing on 7178 examples of test set with `sklearn.metrics`: ``` precision recall f1-score support CELL 0.69 0.81 0.74 5245 EXP_ASSAY 0.56 0.57 0.56 10067 GENEPROD 0.77 0.89 0.82 23587 ORGANISM 0.72 0.82 0.77 3623 SMALL_MOLECULE 0.70 0.80 0.75 6187 SUBCELLULAR 0.65 0.72 0.69 3700 TISSUE 0.62 0.73 0.67 3207 micro avg 0.70 0.79 0.74 55616 macro avg 0.67 0.77 0.72 55616 weighted avg 0.70 0.79 0.74 55616 {'test_loss': 0.1830928772687912, 'test_accuracy_score': 0.9334821000160841, 'test_precision': 0.6987463009514112, 'test_recall': 0.789682825086306, 'test_f1': 0.7414366506288511, 'test_runtime': 61.0547, 'test_samples_per_second': 117.567, 'test_steps_per_second': 1.851} ```
EMBO/sd-panelization
EMBO
roberta
11
6
transformers
0
token-classification
true
false
true
agpl-3.0
['english']
['EMBO/sd-nlp']
null
1
1
0
0
0
0
0
['token classification']
false
true
true
3,248
# sd-panelization ## Model description This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `PANELIZATION` task to perform 'parsing' or 'segmentation' of figure legends into fragments corresponding to sub-panels. Figures are usually composite representations of results obtained with heterogeneous experimental approaches and systems. Breaking figures into panels allows identifying more coherent descriptions of individual scientific experiments. ## Intended uses & limitations #### How to use The intended use of this model is for 'parsing' figure legends into sub-fragments corresponding to individual panels as used in SourceData annotations (https://sourcedata.embo.org). To have a quick check of the model: ```python from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification example = """Fig 4. a, Volume density of early (Avi) and late (Avd) autophagic vacuoles.a, Volume density of early (Avi) and late (Avd) autophagic vacuoles from four independent cultures. Examples of Avi and Avd are shown in b and c, respectively. Bars represent 0.4 μm. d, Labelling density of cathepsin-D as estimated in two independent experiments. e, Labelling density of LAMP-1.""" tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512) model = RobertaForTokenClassification.from_pretrained('EMBO/sd-panelization') ner = pipeline('ner', model, tokenizer=tokenizer) res = ner(example) for r in res: print(r['word'], r['entity']) ``` #### Limitations and bias The model must be used with the `roberta-base` tokenizer. ## Training data The model was trained for token classification using the [`EMBO/sd-nlp PANELIZATION`](https://huggingface.co/datasets/EMBO/sd-nlp) dataset which includes manually annotated examples. ## Training procedure The training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs. Training code is available at https://github.com/source-data/soda-roberta - Model fine-tuned: EMBO/bio-lm - Tokenizer vocab size: 50265 - Training data: EMBO/sd-nlp - Dataset configuration: PANELIZATION - Training with 2175 examples. - Evaluating on 622 examples. - Training on 2 features: `O`, `B-PANEL_START` - Epochs: 1.3 - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 0.0001 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 ## Eval results Testing on 1802 examples from test set with `sklearn.metrics`: ``` precision recall f1-score support PANEL_START 0.89 0.95 0.92 5427 micro avg 0.89 0.95 0.92 5427 macro avg 0.89 0.95 0.92 5427 weighted avg 0.89 0.95 0.92 5427 ```
EasthShin/Android_Ios_Classification
EasthShin
bert
8
1
transformers
0
text-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,265
## Bert-base-uncased for Android-Ios Question Classification **Code**: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/EastHShin/Android-Ios-Classification-Workspace) <br> **Android-Ios-Classification DEMO**: [Ainize Endpoint](https://main-android-ios-classification-east-h-shin.endpoint.ainize.ai/) <br> **Demo web Code**: [Github](https://github.com/EastHShin/Android-Ios-Classification) <br> **Android-Ios-Classification API**: [Ainize API](https://ainize.ai/EastHShin/Android-Ios-Classification) <br> <br> ## Overview **Language model**: bert-base-cased <br> **Language**: English <br> **Training data**: Question classification Android-Ios dataset from [Kaggle](https://www.kaggle.com/xhlulu/question-classification-android-or-ios) ## Usage ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline model_path = "EasthShin/Android_Ios_Classification" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForSequenceClassification.from_pretrained(model_path) classifier = pipeline('text-classification', model=model_path, tokenizer=tokenizer) question = "I bought goodnote in Appstore" result = dict() result[0] = classifier(question)[0] ```
EasthShin/Klue-CommonSense-model
EasthShin
bert
8
6
transformers
1
question-answering
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,699
#### Klue-bert base for Common Sense QA #### Klue-CommonSense-model DEMO: [Ainize DEMO](https://main-klue-common-sense-qa-east-h-shin.endpoint.ainize.ai/) #### Klue-CommonSense-model API: [Ainize API](https://ainize.ai/EastHShin/Klue-CommonSense_QA?branch=main) ### Overview **Language model**: klue/bert-base <br> **Language**: Korean <br> **Downstream-task**: Extractive QA <br> **Training data**: Common sense Data from [Mindslab](https://mindslab.ai:8080/kr/company) <br> **Eval data**: Common sense Data from [Mindslab](https://mindslab.ai:8080/kr/company) <br> **Code**: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/EastHShin/Klue-CommonSense-workspace) <br> ### Usage ### In Transformers ``` from transformers import AutoModelForQuestionAnswering, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("EasthShin/Klue-CommonSense-model") model = AutoModelForQuestionAnswering.from_pretrained("EasthShin/Klue-CommonSense-model") context = "your context" question = "your question" encodings = tokenizer(context, question, max_length=512, truncation=True, padding="max_length", return_token_type_ids=False) encodings = {key: torch.tensor([val]) for key, val in encodings.items()} input_ids = encodings["input_ids"] attention_mask = encodings["attention_mask"] pred = model(input_ids, attention_mask=attention_mask) start_logits, end_logits = pred.start_logits, pred.end_logits token_start_index, token_end_index = start_logits.argmax(dim=-1), end_logits.argmax(dim=-1) pred_ids = input_ids[0][token_start_index: token_end_index + 1] prediction = tokenizer.decode(pred_ids) ```
EasthShin/Youth_Chatbot_Kogpt2-base
EasthShin
gpt2
7
3
transformers
0
text-generation
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,451
## Youth_Chatbot_KoGPT2-base

**Demo Web**: [Ainize Endpoint](https://main-youth-chatbot-ko-gpt2-base-east-h-shin.endpoint.ainize.ai/)
<br>
**Demo Web Code**: [Github](https://github.com/EastHShin/Youth_Chatbot_KoGPT2-base)
<br>
**Youth-Chatbot API**: [Ainize API](https://ainize.ai/EastHShin/Youth_Chatbot_KoGPT2-base_API?branch=main)
<br>
<br>

## Overview

**Language model**: KoGPT2 <br>
**Language**: Korean <br>
**Training data**: [Aihub](https://aihub.or.kr/aidata/7978)

## Usage

```
import torch
from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel

U_TKN = '<usr>'
S_TKN = '<sys>'
MASK = '<unused0>'
SENT = '<unused1>'

tokenizer = PreTrainedTokenizerFast.from_pretrained("EasthShin/Youth_Chatbot_Kogpt2-base",
            bos_token='</s>', eos_token='</s>', unk_token='<unk>',
            pad_token='<pad>', mask_token=MASK)

model = GPT2LMHeadModel.from_pretrained('EasthShin/Youth_Chatbot_Kogpt2-base')

text = "your text"  # the user utterance to answer

input_ids = tokenizer.encode(U_TKN + text + SENT + S_TKN)
gen_ids = model.generate(torch.tensor([input_ids]),
                           max_length=128,
                           repetition_penalty=2.0,
                           pad_token_id=tokenizer.pad_token_id,
                           eos_token_id=tokenizer.eos_token_id,
                           bos_token_id=tokenizer.bos_token_id,
                           use_cache=True)
generated = tokenizer.decode(gen_ids[0, :].tolist())
print(generated)
```
Ebtihal/AraBertMo_base_V1
Ebtihal
bert
7
3
transformers
0
fill-mask
true
false
false
null
['ar']
['OSCAR']
null
0
0
0
0
0
0
0
Fill-Mask
false
true
true
1,491
# Arabic BERT Model

**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats.

## Pretraining Corpus

The `AraBertMo_base_V1` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".

## Training results

This model achieves the following results:

| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 10010| 1 | 64 | 157 | 2m 2s | 9.0183 |

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and loading it like this:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V1")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V1")
```

## This model was built for master's degree research at:
- [University of Kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
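As a quick sanity check of the masked-language-model head loaded above, the checkpoint can also be queried through the `fill-mask` pipeline. This is a minimal sketch, not part of the original card; the Arabic prompt is borrowed from the widget examples of the later AraBertMo variants, and the output keys assume the standard `transformers` pipeline interface.

```python
from transformers import pipeline

# Minimal sketch: query the fill-mask head directly through a pipeline.
fill_mask = pipeline("fill-mask", model="Ebtihal/AraBertMo_base_V1")

# Illustrative Arabic prompt with a masked token.
for prediction in fill_mask("السلام عليكم ورحمة[MASK] وبركاتة"):
    print(prediction["token_str"], round(prediction["score"], 4))
```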
Ebtihal/AraBertMo_base_V2
Ebtihal
bert
7
2
transformers
0
fill-mask
true
false
false
null
['ar']
['OSCAR']
null
0
0
0
0
0
0
0
Fill-Mask
false
true
true
1,491
# Arabic BERT Model

**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats.

## Pretraining Corpus

The `AraBertMo_base_V2` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".

## Training results

This model achieves the following results:

| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 20020| 2 | 64 | 626 | 19m 2s | 8.437 |

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and loading it like this:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V2")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V2")
```

## This model was built for master's degree research at:
- [University of Kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
Ebtihal/AraBertMo_base_V3
Ebtihal
bert
7
2
transformers
0
fill-mask
true
false
false
null
['ar']
['OSCAR']
null
0
0
0
0
0
0
0
Fill-Mask
false
true
true
1,497
# Arabic BERT Model

**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats.

## Pretraining Corpus

The `AraBertMo_base_V3` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".

## Training results

This model achieves the following results:

| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 30024| 3 | 64 | 1410 | 3h 10m 31s | 8.0201 |

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and loading it like this:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V3")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V3")
```

## This model was built for master's degree research at:
- [University of Kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
Ebtihal/AraBertMo_base_V4
Ebtihal
bert
7
2
transformers
0
fill-mask
true
false
false
null
['ar']
['OSCAR']
null
0
0
0
0
0
0
0
Fill-Mask
false
true
true
1,497
# Arabic BERT Model

**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats.

## Pretraining Corpus

The `AraBertMo_base_V4` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".

## Training results

This model achieves the following results:

| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 40032| 4 | 64 | 2500 | 5h 10m 20s | 7.6544 |

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and loading it like this:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V4")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V4")
```

## This model was built for master's degree research at:
- [University of Kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
Ebtihal/AraBertMo_base_V5
Ebtihal
bert
7
2
transformers
0
fill-mask
true
false
false
null
['ar']
['OSCAR']
null
0
0
0
0
0
0
0
Fill-Mask
false
true
true
1,497
# Arabic BERT Model

**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats.

## Pretraining Corpus

The `AraBertMo_base_V5` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".

## Training results

This model achieves the following results:

| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 50046| 5 | 64 | 3910 | 6h 49m 59s | 7.4599 |

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and loading it like this:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V5")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V5")
```

## This model was built for master's degree research at:
- [University of Kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
Ebtihal/AraBertMo_base_V6
Ebtihal
bert
7
2
transformers
0
fill-mask
true
false
false
null
['ar']
['OSCAR']
null
0
0
0
0
0
0
0
Fill-Mask
false
true
true
1,495
# Arabic BERT Model

**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats.

## Pretraining Corpus

The `AraBertMo_base_V6` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".

## Training results

This model achieves the following results:

| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 50046| 6 | 64 | 4692 | 5h 41m 9s | 7.3099 |

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and loading it like this:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V6")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V6")
```

## This model was built for master's degree research at:
- [University of Kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
Ebtihal/AraBertMo_base_V7
Ebtihal
bert
8
2
transformers
0
fill-mask
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,735
Arabic Model AraBertMo_base_V7

---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---

# Arabic BERT Model

**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats.

## Pretraining Corpus

The `AraBertMo_base_V7` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".

## Training results

This model achieves the following results:

| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 50046| 7 | 64 | 5915 | 5h 23m 5s | 7.1381 |

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and loading it like this:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V7")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V7")
```

## This model was built for master's degree research at:
- [University of Kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
Ebtihal/AraBertMo_base_V8
Ebtihal
bert
7
2
transformers
0
fill-mask
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,729
Arabic Model AraBertMo_base_V8

---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---

# Arabic BERT Model

**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats.

## Pretraining Corpus

The `AraBertMo_base_V8` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".

## Training results

This model achieves the following results:

| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 40032| 8 | 64 | 5008 | 10h 5m 57s | 7.2164 |

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and loading it like this:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V8")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V8")
```

## This model was built for master's degree research at:
- [University of Kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
Ebtihal/AraBertMo_base_V9
Ebtihal
bert
8
2
transformers
0
fill-mask
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,736
Arabic Model AraBertMo_base_V9

---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---

# Arabic BERT Model

**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats.

## Pretraining Corpus

The `AraBertMo_base_V9` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".

## Training results

This model achieves the following results:

| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 30024| 9 | 64 | 4230 | 7h 57m 42s | 7.3264 |

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and loading it like this:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V9")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V9")
```

## This model was built for master's degree research at:
- [University of Kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
Edomonndo/opus-mt-en-ro-finetuned-en-to-ro
Edomonndo
marian
13
4
transformers
0
text2text-generation
true
false
false
null
null
['wmt16']
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,273
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ro-finetuned-en-to-ro This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.2886 - Bleu: 28.1641 - Gen Len: 34.1071 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 0.7436 | 1.0 | 38145 | 1.2886 | 28.1641 | 34.1071 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
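The card above lists training details but no inference snippet. A minimal usage sketch with the standard `transformers` translation pipeline follows; the English input sentence is a made-up placeholder.

```python
from transformers import pipeline

# Minimal sketch: English-to-Romanian translation with the fine-tuned checkpoint.
translator = pipeline("translation", model="Edomonndo/opus-mt-en-ro-finetuned-en-to-ro")

result = translator("The weather is nice today.", max_length=64)  # placeholder input
print(result[0]["translation_text"])
```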
Edomonndo/opus-mt-ja-en-finetuned-ja-to-en_test
Edomonndo
marian
157
2
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,955
<!-- This model card has been generated automatically according to the information the
Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# opus-mt-ja-en-finetuned-ja-to-en_test

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4737
- Bleu: 80.2723
- Gen Len: 16.5492

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.1237 | 1.0 | 247 | 0.6131 | 60.9383 | 16.4152 |
| 0.5395 | 2.0 | 494 | 0.5274 | 67.5705 | 16.2883 |
| 0.3584 | 3.0 | 741 | 0.5122 | 71.3098 | 16.3777 |
| 0.2563 | 4.0 | 988 | 0.4887 | 73.6639 | 16.401 |
| 0.138 | 5.0 | 1235 | 0.4796 | 76.7942 | 16.4873 |
| 0.0979 | 6.0 | 1482 | 0.4849 | 76.9404 | 16.6162 |
| 0.0792 | 7.0 | 1729 | 0.4806 | 78.9831 | 16.5442 |
| 0.0569 | 8.0 | 1976 | 0.4765 | 79.3461 | 16.4873 |
| 0.0299 | 9.0 | 2223 | 0.4751 | 79.7901 | 16.4863 |
| 0.0204 | 10.0 | 2470 | 0.4737 | 80.2723 | 16.5492 |

### Framework versions

- Transformers 4.9.1
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.3
Edomonndo/opus-mt-ja-en-finetuned-ja-to-en_xml
Edomonndo
marian
17
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,953
<!-- This model card has been generated automatically according to the information the
Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# opus-mt-ja-en-finetuned-ja-to-en_xml

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7520
- Bleu: 73.8646
- Gen Len: 27.0884

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.0512 | 1.0 | 748 | 0.8333 | 59.8234 | 27.905 |
| 0.6076 | 2.0 | 1496 | 0.7817 | 62.5606 | 26.1834 |
| 0.4174 | 3.0 | 2244 | 0.7817 | 64.8346 | 28.2918 |
| 0.2971 | 4.0 | 2992 | 0.7653 | 67.6013 | 27.2222 |
| 0.2172 | 5.0 | 3740 | 0.7295 | 69.4017 | 27.0174 |
| 0.1447 | 6.0 | 4488 | 0.7522 | 68.8355 | 28.2865 |
| 0.0953 | 7.0 | 5236 | 0.7596 | 71.4743 | 27.1861 |
| 0.0577 | 8.0 | 5984 | 0.7469 | 72.0684 | 26.921 |
| 0.04 | 9.0 | 6732 | 0.7526 | 73.2821 | 27.1365 |
| 0.0213 | 10.0 | 7480 | 0.7520 | 73.8646 | 27.0884 |

### Framework versions

- Transformers 4.9.1
- Pytorch 1.10.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.3
Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-portuguese
Edresson
wav2vec2
14
6
transformers
1
automatic-speech-recognition
true
false
false
apache-2.0
['pt']
['Common Voice']
null
0
0
0
0
0
0
0
['audio', 'speech', 'wav2vec2', 'pt', 'portuguese-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch']
false
true
true
1,475
# Wav2vec2 Large 100k Voxpopuli fine-tuned with Common Voice and TTS-Portuguese Corpus in Portuguese [Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Portuguese using the Common Voice 7.0 and TTS-Portuguese Corpus. # Use this model ```python from transformers import AutoTokenizer, Wav2Vec2ForCTC tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-portuguese") model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-portuguese") ``` # Results For the results check the [paper](https://arxiv.org/abs/2204.00618) # Example test with Common Voice Dataset ```python dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ``` ```python ds = dataset.map(map_to_array) result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) print(wer.compute(predictions=result["predicted"], references=result["target"])) ```
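The evaluation snippet above references `chars_to_ignore_regex`, `map_to_pred` and `wer` without defining them. The sketch below shows one plausible way to fill those gaps, assuming the checkpoint ships a `Wav2Vec2Processor` and using the `datasets` WER metric (which needs the `jiwer` package); the regex and the helper are illustrative assumptions, not the exact code behind the paper's numbers.

```python
import torch
from datasets import load_metric
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-portuguese"

# Assumption: a processor (feature extractor + tokenizer) is bundled with the checkpoint.
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Illustrative punctuation filter; the exact regex may differ from the original setup.
chars_to_ignore_regex = '[,?.!;:"-]'

# Word-error-rate metric used in the final print statement of the snippet above.
wer = load_metric("wer")

def map_to_pred(batch):
    # Turn the pre-resampled 16 kHz audio into logits and greedy-decode with CTC.
    inputs = processor(batch["speech"], sampling_rate=16_000,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch
```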
Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-russian
Edresson
wav2vec2
14
8
transformers
2
automatic-speech-recognition
true
false
false
apache-2.0
['ru']
['Common Voice']
null
0
0
0
0
0
0
0
['audio', 'speech', 'wav2vec2', 'ru', 'russian-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch']
false
true
true
1,437
# Wav2vec2 Large 100k Voxpopuli fine-tuned with Common Voice and M-AILABS in Russian

[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Russian using the Common Voice 7.0 and M-AILABS.

# Use this model

```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC

tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-russian")

model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-russian")
```

# Results

For the results check the [paper](https://arxiv.org/abs/2204.00618)

# Example test with Common Voice Dataset

```python
dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-6.1-2020-12-11")

resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```

```python
ds = dataset.map(map_to_array)

result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))

print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-portuguese
Edresson
wav2vec2
15
9
transformers
2
automatic-speech-recognition
true
false
false
apache-2.0
['pt']
['Common Voice']
null
0
0
0
0
0
0
0
['audio', 'speech', 'wav2vec2', 'pt', 'Portuguese-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch']
false
true
true
1,633
# Wav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using the Common Voice 7.0, TTS-Portuguese Corpus plus data augmentation

[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Portuguese using the Common Voice 7.0 and TTS-Portuguese corpora plus a data augmentation method based on TTS and voice conversion.

# Use this model

```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC

tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-portuguese")

model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-portuguese")
```

# Results

For the results check the [paper](https://arxiv.org/abs/2204.00618)

# Example test with Common Voice Dataset

```python
dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-7.0-2021-07-21")

resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```

```python
ds = dataset.map(map_to_array)

result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))

print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian
Edresson
wav2vec2
14
8
transformers
2
automatic-speech-recognition
true
false
false
apache-2.0
['pt']
['Common Voice']
null
0
0
0
0
0
0
0
['audio', 'speech', 'wav2vec2', 'pt', 'Russian-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch']
false
true
true
1,601
# Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0, MAILABS plus data augmentation [Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0, M-AILABS plus data augmentation method based on TTS and voice conversion. # Use this model ```python from transformers import AutoTokenizer, Wav2Vec2ForCTC tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian") model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian") ``` # Results For the results check the [paper](https://arxiv.org/abs/2204.00618) # Example test with Common Voice Dataset ```python dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-7.0-2021-07-21") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ``` ```python ds = dataset.map(map_to_array) result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) print(wer.compute(predictions=result["predicted"], references=result["target"])) ```
Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese
Edresson
wav2vec2
14
12
transformers
1
automatic-speech-recognition
true
false
false
apache-2.0
['pt']
['Common Voice']
null
0
0
0
0
0
0
0
['audio', 'speech', 'wav2vec2', 'pt', 'portuguese-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch']
false
true
true
1,538
# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Portuguese [Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Portuguese using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion. # Use this model ```python from transformers import AutoTokenizer, Wav2Vec2ForCTC tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese") model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese") ``` # Results For the results check the [paper](https://arxiv.org/abs/2204.00618) # Example test with Common Voice Dataset ```python dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-7.0-2021-07-21") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ``` ```python ds = dataset.map(map_to_array) result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) print(wer.compute(predictions=result["predicted"], references=result["target"])) ```
Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-russian
Edresson
wav2vec2
14
5
transformers
2
automatic-speech-recognition
true
false
false
apache-2.0
['pt']
['Common Voice']
null
0
0
0
0
0
0
0
['audio', 'speech', 'wav2vec2', 'pt', 'Russian-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch']
false
true
true
1,525
# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Russian

[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Russian using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion.

# Use this model

```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC

tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-russian")

model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-russian")
```

# Results

For the results check the [paper](https://arxiv.org/abs/2204.00618)

# Example test with Common Voice Dataset

```python
dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-7.0-2021-07-21")

resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```

```python
ds = dataset.map(map_to_array)

result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))

print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
Edresson/wav2vec2-large-xlsr-coraa-portuguese
Edresson
wav2vec2
8
2,039
transformers
11
automatic-speech-recognition
true
false
false
apache-2.0
['pt']
['CORAA']
null
0
0
0
0
1
1
0
['audio', 'speech', 'wav2vec2', 'pt', 'portuguese-speech-corpus', 'automatic-speech-recognition', 'hf-asr-leaderboard', 'speech', 'PyTorch']
true
true
true
1,326
# Wav2vec 2.0 trained with CORAA Portuguese Dataset

This is a demonstration of a Wav2vec model fine-tuned for Portuguese on the [CORAA dataset](https://github.com/nilc-nlp/CORAA).

# Use this model

```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC

tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")

model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
```

# Results

For the results check the [CORAA article](https://arxiv.org/abs/2110.15731)

# Example test with Common Voice Dataset

```python
dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")

resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```

```python
ds = dataset.map(map_to_array)

result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))

print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
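For a quick transcription check without writing the full evaluation loop, the checkpoint can also be loaded through the `automatic-speech-recognition` pipeline. This is a minimal sketch; the audio path is a placeholder and the file is assumed to contain 16 kHz Portuguese speech.

```python
from transformers import pipeline

# Minimal sketch: transcribe a local Portuguese audio file.
asr = pipeline("automatic-speech-recognition",
               model="Edresson/wav2vec2-large-xlsr-coraa-portuguese")

print(asr("path/to/audio.wav")["text"])  # placeholder path
```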
Einmalumdiewelt/PegasusXSUM_GNAD
Einmalumdiewelt
pegasus
16
101
transformers
0
summarization
true
false
false
null
['de']
null
null
1
0
1
0
1
1
0
['generated_from_trainer', 'summarization']
true
true
true
1,117
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PegasusXSUM_GNAD This model is a fine-tuned version of [Einmalumdiewelt/PegasusXSUM_GNAD](https://huggingface.co/Einmalumdiewelt/PegasusXSUM_GNAD) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4386 - Rouge1: 26.7818 - Rouge2: 7.6864 - Rougel: 18.6264 - Rougelsum: 22.822 - Gen Len: 67.076 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
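The card above marks usage as "more information needed"; a minimal sketch with the `summarization` pipeline is shown below, with a placeholder German input.

```python
from transformers import pipeline

# Minimal sketch: German abstractive summarization with the fine-tuned Pegasus checkpoint.
summarizer = pipeline("summarization", model="Einmalumdiewelt/PegasusXSUM_GNAD")

article = "Hier steht der vollstaendige deutsche Artikeltext."  # placeholder input
print(summarizer(article, max_length=96, min_length=16)[0]["summary_text"])
```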
Einmalumdiewelt/T5-Base_GNAD
Einmalumdiewelt
t5
16
20,774
transformers
3
summarization
true
false
false
null
['de']
null
null
0
0
0
0
0
0
0
['generated_from_trainer', 'summarization']
true
true
true
1,107
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # T5-Base_GNAD This model is a fine-tuned version of [Einmalumdiewelt/T5-Base_GNAD](https://huggingface.co/Einmalumdiewelt/T5-Base_GNAD) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1025 - Rouge1: 27.5357 - Rouge2: 8.5623 - Rougel: 19.1508 - Rougelsum: 23.9029 - Gen Len: 52.7253 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
EleutherAI/enformer-191k
EleutherAI
enformer
4
2
transformers
1
null
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,501
# Enformer Enformer model. It was introduced in the paper [Effective gene expression prediction from sequence by integrating long-range interactions.](https://www.nature.com/articles/s41592-021-01252-x) by Avsec et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/enformer). This particular model was trained on sequences of 196,608 basepairs, target length 896, with shift augmentation but without reverse complement, on poisson loss objective. Final human pearson R of ~0.45. This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the [enformer-pytorch repository](https://github.com/lucidrains/enformer-pytorch). Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence. We refer to the [paper](https://www.nature.com/articles/s41592-021-01252-x) published in Nature for details. ### How to use Refer to the README of [enformer-pytorch](https://github.com/lucidrains/enformer-pytorch) regarding usage. ### Citation info ``` Avsec, Ž., Agarwal, V., Visentin, D. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18, 1196–1203 (2021). https://doi.org/10.1038/s41592-021-01252-x ```
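The card defers usage to the enformer-pytorch README. Purely as an orientation, the sketch below assumes that package exposes `Enformer.from_pretrained` and accepts an integer-encoded DNA sequence of 196,608 base pairs; check the upstream README before relying on the exact call signature or output keys.

```python
import torch
from enformer_pytorch import Enformer  # assumption: enformer-pytorch is installed

# Sketch only -- defer to the enformer-pytorch README for authoritative usage.
model = Enformer.from_pretrained("EleutherAI/enformer-191k")

# Dummy integer-encoded DNA sequence (0..4 for A, C, G, T, N), 196,608 bp long.
seq = torch.randint(0, 5, (1, 196_608))

with torch.no_grad():
    output = model(seq)  # expected: dict of per-organism track predictions

print({name: tracks.shape for name, tracks in output.items()})
```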
EleutherAI/enformer-191k_corr_coef_obj
EleutherAI
enformer
4
1
transformers
0
null
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,501
# Enformer Enformer model. It was introduced in the paper [Effective gene expression prediction from sequence by integrating long-range interactions.](https://www.nature.com/articles/s41592-021-01252-x) by Avsec et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/enformer). This particular model was trained on sequences of 196,608 basepairs, target length 896, with shift augmentation but without reverse complement, on poisson loss objective. Final human pearson R of ~0.49. This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the [enformer-pytorch repository](https://github.com/lucidrains/enformer-pytorch). Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence. We refer to the [paper](https://www.nature.com/articles/s41592-021-01252-x) published in Nature for details. ### How to use Refer to the README of [enformer-pytorch](https://github.com/lucidrains/enformer-pytorch) regarding usage. ### Citation info ``` Avsec, Ž., Agarwal, V., Visentin, D. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18, 1196–1203 (2021). https://doi.org/10.1038/s41592-021-01252-x ```
EleutherAI/enformer-corr_coef_obj
EleutherAI
enformer
4
1
transformers
0
null
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,473
# Enformer Enformer model. It was introduced in the paper [Effective gene expression prediction from sequence by integrating long-range interactions.](https://www.nature.com/articles/s41592-021-01252-x) by Avsec et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/enformer). This particular model was trained on sequences of 131,072 basepairs, target length 896 on v3-64 TPUs for 3 days with sequence augmentations and pearson correlation objective. This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the [enformer-pytorch repository](https://github.com/lucidrains/enformer-pytorch). Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence. We refer to the [paper](https://www.nature.com/articles/s41592-021-01252-x) published in Nature for details. ### How to use Refer to the README of [enformer-pytorch](https://github.com/lucidrains/enformer-pytorch) regarding usage. ### Citation info ``` Avsec, Ž., Agarwal, V., Visentin, D. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18, 1196–1203 (2021). https://doi.org/10.1038/s41592-021-01252-x ```
EleutherAI/enformer-preview
EleutherAI
enformer
4
1
transformers
2
null
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,462
# Enformer Enformer model. It was introduced in the paper [Effective gene expression prediction from sequence by integrating long-range interactions.](https://www.nature.com/articles/s41592-021-01252-x) by Avsec et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/enformer). This particular model was trained on sequences of 131,072 basepairs, target length 896 on v3-64 TPUs for 2 and a half days without augmentations and poisson loss. This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the [enformer-pytorch repository](https://github.com/lucidrains/enformer-pytorch). Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence. We refer to the [paper](https://www.nature.com/articles/s41592-021-01252-x) published in Nature for details. ### How to use Refer to the README of [enformer-pytorch](https://github.com/lucidrains/enformer-pytorch) regarding usage. ### Citation info ``` Avsec, Ž., Agarwal, V., Visentin, D. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18, 1196–1203 (2021). https://doi.org/10.1038/s41592-021-01252-x ```
EleutherAI/gpt-j-6B
EleutherAI
gptj
12
1,043,717
transformers
591
text-generation
true
true
true
apache-2.0
['en']
['the_pile']
null
5
0
1
4
9
3
6
['pytorch', 'causal-lm']
false
true
true
9,968
# GPT-J 6B ## Model Description GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters. <figure> | Hyperparameter | Value | |----------------------|------------| | \\(n_{parameters}\\) | 6053381344 | | \\(n_{layers}\\) | 28&ast; | | \\(d_{model}\\) | 4096 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50257/50400&dagger; (same tokenizer as GPT-2/3) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | <figcaption><p><strong>&ast;</strong> Each layer consists of one feedforward block and one self attention block.</p> <p><strong>&dagger;</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure> The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. ## Training data GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai). ## Training procedure This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. ## Intended Use and Limitations GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt. ### How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") ``` ### Limitations and Biases The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. 
## Evaluation results <figure> | Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) | |--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------| | Random Chance | &check; | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 | | GPT-3 Ada&ddagger; | &cross; | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- | | GPT-2 1.5B | &check; | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 | | GPT-Neo 1.3B&ddagger; | &check; | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 | | Megatron-2.5B&ast; | &cross; | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 | | GPT-Neo 2.7B&ddagger; | &check; | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 | | GPT-3 1.3B&ast;&ddagger; | &cross; | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 | | GPT-3 Babbage&ddagger; | &cross; | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- | | Megatron-8.3B&ast; | &cross; | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 | | GPT-3 2.7B&ast;&ddagger; | &cross; | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 | | Megatron-11B&dagger; | &check; | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 | | **GPT-J 6B&ddagger;** | **&check;** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** | | GPT-3 6.7B&ast;&ddagger; | &cross; | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 | | GPT-3 Curie&ddagger; | &cross; | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- | | GPT-3 13B&ast;&ddagger; | &cross; | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 | | GPT-3 175B&ast;&ddagger; | &cross; | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 | | GPT-3 Davinci&ddagger; | &cross; | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- | <figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p> <p><strong>&ast;</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more details.</p> <p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a> <a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>) Thus, evaluation was not attempted.</p> <p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. 
The OpenAI GPT-3 models failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one is trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure> ## Citation and Related Information ### BibTeX entry To cite this model: ```bibtex @misc{gpt-j, author = {Wang, Ben and Komatsuzaki, Aran}, title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` To cite the codebase that trained this model: ```bibtex @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email. ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha. Thanks to everyone who have helped out one way or another (listed alphabetically): - [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues. - [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package. - [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table. - [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo. - [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts. - [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
EleutherAI/gpt-neo-1.3B
EleutherAI
gpt_neo
10
154,247
transformers
117
text-generation
true
false
true
mit
['en']
['the_pile']
null
2
1
0
1
1
0
1
['text generation', 'pytorch', 'causal-lm']
false
true
true
4,477
# GPT-Neo 1.3B ## Model Description GPT-Neo 1.3B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 1.3B represents the number of parameters of this particular pre-trained model. ## Training data GPT-Neo 1.3B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model. ## Training procedure This model was trained on the Pile for 380 billion tokens over 362,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss. ## Intended Use and Limitations This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B') >>> generator("EleutherAI has", do_sample=True, min_length=50) [{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. 
## Eval results ### Linguistic Reasoning | Model and Size | Pile BPB | Pile PPL | Wikitext PPL | Lambada PPL | Lambada Acc | Winogrande | Hellaswag | | ---------------- | ---------- | ---------- | ------------- | ----------- | ----------- | ---------- | ----------- | | **GPT-Neo 1.3B** | **0.7527** | **6.159** | **13.10** | **7.498** | **57.23%** | **55.01%** | **38.66%** | | GPT-2 1.5B | 1.0468 | ----- | 17.48 | 10.634 | 51.21% | 59.40% | 40.03% | | GPT-Neo 2.7B | 0.7165 | 5.646 | 11.39 | 5.626 | 62.22% | 56.50% | 42.73% | | GPT-3 Ada | 0.9631 | ----- | ----- | 9.954 | 51.60% | 52.90% | 35.93% | ### Physical and Scientific Reasoning | Model and Size | MathQA | PubMedQA | Piqa | | ---------------- | ---------- | ---------- | ----------- | | **GPT-Neo 1.3B** | **24.05%** | **54.40%** | **71.11%** | | GPT-2 1.5B | 23.64% | 58.33% | 70.78% | | GPT-Neo 2.7B | 24.72% | 57.54% | 72.14% | | GPT-3 Ada | 24.29% | 52.80% | 68.88% | ### Down-Stream Applications TBD ### BibTeX entry and citation info To cite this model, please use ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } @article{gao2020pile, title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } ```
EleutherAI/gpt-neo-125M
EleutherAI
gpt_neo
10
226,095
transformers
58
text-generation
true
false
true
mit
['en']
['the_pile']
null
1
0
1
0
1
0
1
['text generation', 'pytorch', 'causal-lm']
false
true
true
3,336
# GPT-Neo 125M ## Model Description GPT-Neo 125M is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pre-trained model. ## Training data GPT-Neo 125M was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model. ## Training procedure This model was trained on the Pile for 300 billion tokens over 572,300 steps. It was trained as a masked autoregressive language model, using cross-entropy loss. ## Intended Use and Limitations This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-125M') >>> generator("EleutherAI has", do_sample=True, min_length=20) [{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ## Eval results TBD ### Down-Stream Applications TBD ### BibTeX entry and citation info To cite this model, use ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } @article{gao2020pile, title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } ```
EleutherAI/gpt-neo-2.7B
EleutherAI
gpt_neo
10
253,464
transformers
225
text-generation
true
false
true
mit
['en']
['the_pile']
null
0
0
0
0
3
3
0
['text generation', 'pytorch', 'causal-lm']
false
true
true
4,899
# GPT-Neo 2.7B ## Model Description GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model. ## Training data GPT-Neo 2.7B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model. ## Training procedure This model was trained for 420 billion tokens over 400,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss. ## Intended Use and Limitations This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B') >>> generator("EleutherAI has", do_sample=True, min_length=50) [{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ## Eval results All evaluations were done using our [evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness). Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our [Discord](https://discord.gg/vtRgjbM). 
### Linguistic Reasoning | Model and Size | Pile BPB | Pile PPL | Wikitext PPL | Lambada PPL | Lambada Acc | Winogrande | Hellaswag | | ---------------- | ---------- | ---------- | ------------- | ----------- | ----------- | ---------- | ----------- | | GPT-Neo 1.3B | 0.7527 | 6.159 | 13.10 | 7.498 | 57.23% | 55.01% | 38.66% | | GPT-2 1.5B | 1.0468 | ----- | 17.48 | 10.634 | 51.21% | 59.40% | 40.03% | | **GPT-Neo 2.7B** | **0.7165** | **5.646** | **11.39** | **5.626** | **62.22%** | **56.50%** | **42.73%** | | GPT-3 Ada | 0.9631 | ----- | ----- | 9.954 | 51.60% | 52.90% | 35.93% | ### Physical and Scientific Reasoning | Model and Size | MathQA | PubMedQA | Piqa | | ---------------- | ---------- | ---------- | ----------- | | GPT-Neo 1.3B | 24.05% | 54.40% | 71.11% | | GPT-2 1.5B | 23.64% | 58.33% | 70.78% | | **GPT-Neo 2.7B** | **24.72%** | **57.54%** | **72.14%** | | GPT-3 Ada | 24.29% | 52.80% | 68.88% | ### Down-Stream Applications TBD ### BibTeX entry and citation info To cite this model, use ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } @article{gao2020pile, title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } ```
Elron/bleurt-base-128
Elron
bert
8
58
transformers
1
text-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
999
## BLEURT PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The model conversion code originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224). ## Usage Example ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-base-128") model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-base-128") model.eval() references = ["hello world", "hello world"] candidates = ["hi universe", "bye world"] with torch.no_grad(): scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze() print(scores) # tensor([0.3598, 0.0723]) ```
Elron/bleurt-base-512
Elron
bert
8
432
transformers
1
text-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
999
## BLEURT PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The model conversion code originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224). ## Usage Example ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-base-512") model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-base-512") model.eval() references = ["hello world", "hello world"] candidates = ["hi universe", "bye world"] with torch.no_grad(): scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze() print(scores) # tensor([1.0327, 0.2055]) ```
Elron/bleurt-large-128
Elron
bert
8
39
transformers
1
text-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,003
## BLEURT PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The model conversion code originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224). ## Usage Example ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-large-128") model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-large-128") model.eval() references = ["hello world", "hello world"] candidates = ["hi universe", "bye world"] with torch.no_grad(): scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze() print(scores) # tensor([ 0.0020, -0.6647]) ```
Elron/bleurt-large-512
Elron
bert
8
27,744
transformers
1
text-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
998
## BLEURT PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The model conversion code originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224). ## Usage Example ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-large-512") model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-large-512") model.eval() references = ["hello world", "hello world"] candidates = ["hi universe", "bye world"] with torch.no_grad(): scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze() print(scores) # tensor([0.9877, 0.0475]) ```
Elron/bleurt-tiny-128
Elron
bert
8
17
transformers
1
text-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,001
## BLEURT PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The model conversion code originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224). ## Usage Example ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-tiny-512") model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-tiny-512") model.eval() references = ["hello world", "hello world"] candidates = ["hi universe", "bye world"] with torch.no_grad(): scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze() print(scores) # tensor([-1.0563, -0.3004]) ```
Elron/bleurt-tiny-512
Elron
bert
8
64,801
transformers
2
text-classification
true
false
false
null
null
null
null
1
0
1
0
0
0
0
['text-classification', 'bert']
false
true
true
4,596
# Model Card for bleurt-tiny-512 # Model Details ## Model Description PyTorch version of the original BLEURT models from the ACL paper - **Developed by:** Elron Bandel, Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research - **Shared by [Optional]:** Elron Bandel - **Model type:** Text Classification - **Language(s) (NLP):** More information needed - **License:** More information needed - **Parent Model:** BERT - **Resources for more information:** - [GitHub Repo](https://github.com/google-research/bleurt/tree/master) - [Associated Paper](https://aclanthology.org/2020.acl-main.704/) - [Blog Post](https://ai.googleblog.com/2020/05/evaluating-natural-language-generation.html) # Uses ## Direct Use This model can be used for the task of Text Classification ## Downstream Use [Optional] More information needed. ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data The model authors note in the [associated paper](https://aclanthology.org/2020.acl-main.704.pdf): > We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which includes several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year. ## Training Procedure ### Preprocessing More information needed ### Speeds, Sizes, Times More information needed # Evaluation ## Testing Data, Factors & Metrics ### Testing Data The test sets for years 2018 and 2019 [of the WMT Metrics Shared Task, to-English language pairs] are noisier. ### Factors More information needed ### Metrics More information needed ## Results More information needed # Model Examination More information needed # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software More information needed.
# Citation **BibTeX:** ```bibtex @inproceedings{sellam2020bleurt, title = {BLEURT: Learning Robust Metrics for Text Generation}, author = {Thibault Sellam and Dipanjan Das and Ankur P Parikh}, year = {2020}, booktitle = {Proceedings of ACL} } ``` # Glossary [optional] More information needed # More Information [optional] More information needed # Model Card Authors [optional] Elron Bandel in collaboration with Ezi Ozoani and the Hugging Face team # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-tiny-512") model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-tiny-512") model.eval() references = ["hello world", "hello world"] candidates = ["hi universe", "bye world"] with torch.no_grad(): scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze() print(scores) # tensor([-0.9414, -0.5678]) ``` See [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) for model conversion code. </details>
Emanuel/autonlp-pos-tag-bosque
Emanuel
bert
9
37
transformers
2
token-classification
true
false
false
null
['pt']
['Emanuel/autonlp-data-pos-tag-bosque']
6.2107269129101805
0
0
0
0
0
0
0
autonlp
false
true
true
1,106
# Model Trained Using AutoNLP - Problem type: Entity Extraction - Model ID: 21124427 - CO2 Emissions (in grams): 6.2107269129101805 ## Validation Metrics - Loss: 0.09813392907381058 - Accuracy: 0.9714309035997062 - Precision: 0.9721275936822545 - Recall: 0.9735345807918949 - F1: 0.9728305785123967 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Emanuel/autonlp-pos-tag-bosque-21124427 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("Emanuel/autonlp-pos-tag-bosque") tokenizer = AutoTokenizer.from_pretrained("Emanuel/autonlp-pos-tag-bosque") inputs = tokenizer("A noiva casa de branco", return_tensors="pt") outputs = model(**inputs) labelids = outputs.logits.squeeze().argmax(axis=-1) labels = [model.config.id2label[int(x)] for x in labelids] labels = labels[1:-1]# Filter start and end of sentence symbols ```
Emanuel/bertweet-emotion-base
Emanuel
roberta
21
1,387
transformers
1
text-classification
true
false
false
apache-2.0
null
['emotion']
null
3
0
3
0
0
0
0
['generated_from_trainer']
true
true
true
519
# bertweet-emotion-base This model is a fine-tuned version of [Bertweet](https://huggingface.co/vinai/bertweet-base). It achieves the following results on the evaluation set: - Loss: 0.1172 - Accuracy: 0.945 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 80 - eval_batch_size: 80 - lr_scheduler_type: linear - num_epochs: 6.0 ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu113 - Datasets 1.15.1 - Tokenizers 0.10.3
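The card above stops at the training recipe; a minimal inference sketch is given below. It assumes the emotion label names are stored in the checkpoint's `config.id2label`, which the card does not state.

```python
# Minimal usage sketch for Emanuel/bertweet-emotion-base (assumption: the
# pipeline can resolve the emotion labels from config.id2label).
from transformers import pipeline

classifier = pipeline("text-classification", model="Emanuel/bertweet-emotion-base")
print(classifier("I finally got the results and I can't stop smiling!"))
# Expected output shape: [{'label': <emotion name>, 'score': <float>}]
```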
Emanuel/roebrta-base-val-test
Emanuel
roberta
14
2
transformers
0
fill-mask
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,082
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # language-modeling This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4229 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.8.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
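Since this checkpoint is a masked-language-modeling fine-tune of roberta-base, a fill-mask query is the natural smoke test. The sketch below assumes the standard RoBERTa `<mask>` token; the card itself gives no usage example.

```python
# Minimal fill-mask sketch for Emanuel/roebrta-base-val-test
# (assumption: the tokenizer keeps RoBERTa's default "<mask>" token).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Emanuel/roebrta-base-val-test")
for candidate in unmasker("The goal of language modeling is to predict the next <mask>."):
    print(candidate["token_str"], round(candidate["score"], 4))
```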
Emanuel/twitter-emotion-deberta-v3-base
Emanuel
deberta-v2
17
55
transformers
1
text-classification
true
false
false
apache-2.0
null
['emotion']
null
3
0
3
0
0
0
0
['generated_from_trainer']
true
true
true
537
# twitter-emotion-deberta-v3-base This model is a fine-tuned version of [DeBERTa-v3](https://huggingface.co/microsoft/deberta-v3-base). It achieves the following results on the evaluation set: - Loss: 0.1474 - Accuracy: 0.937 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 80 - eval_batch_size: 80 - lr_scheduler_type: linear - num_epochs: 6.0 ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu113 - Datasets 1.15.1 - Tokenizers 0.10.3
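For contrast with the pipeline-based examples elsewhere in this collection, here is a lower-level sketch that scores a tweet explicitly with a softmax over the logits. The presence of readable names in `config.id2label` is an assumption; the card does not document the label set.

```python
# Explicit forward pass through Emanuel/twitter-emotion-deberta-v3-base
# (assumption: config.id2label maps class indices to emotion names).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Emanuel/twitter-emotion-deberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("what a gloomy, rainy monday...", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

for idx, p in enumerate(probs):
    print(model.config.id2label[idx], float(p))
```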
Emmanuel/bert-finetuned-ner
Emmanuel
bert
14
5
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,519
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0603 - Precision: 0.9317 - Recall: 0.9510 - F1: 0.9413 - Accuracy: 0.9866 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0872 | 1.0 | 1756 | 0.0660 | 0.9152 | 0.9350 | 0.9250 | 0.9827 | | 0.0386 | 2.0 | 3512 | 0.0579 | 0.9374 | 0.9498 | 0.9436 | 0.9864 | | 0.0225 | 3.0 | 5268 | 0.0603 | 0.9317 | 0.9510 | 0.9413 | 0.9866 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
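The metrics above come from CoNLL-2003, so entity extraction is the intended use; below is a minimal sketch of running the checkpoint through the token-classification pipeline. The `aggregation_strategy` argument is a standard transformers option, not something the card documents.

```python
# Minimal NER sketch for Emmanuel/bert-finetuned-ner; output entity types
# follow the CoNLL-2003 label set (PER, ORG, LOC, MISC) used for fine-tuning.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Emmanuel/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face was founded in New York by Clément Delangue."))
```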
EngNada/wav2vec2-large-xlsr-53-demo-colab
EngNada
wav2vec2
13
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,318
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 7.9807 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 22.8021 | 1.78 | 80 | 7.9807 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
EnsarEmirali/distilbert-base-uncased-finetuned-emotion
EnsarEmirali
distilbert
12
6
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,339
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2131 - Accuracy: 0.9265 - F1: 0.9269 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8031 | 1.0 | 250 | 0.2973 | 0.9125 | 0.9110 | | 0.2418 | 2.0 | 500 | 0.2131 | 0.9265 | 0.9269 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.1 - Datasets 1.16.1 - Tokenizers 0.10.3
ErykWdowiak/GPTalian
ErykWdowiak
gpt2
15
8
transformers
0
text-generation
true
false
true
apache-2.0
['en', 'it', 'scn', 'nap']
null
null
1
1
0
0
0
0
0
['exbert', 'gpt2']
false
true
true
610
# GPTalian This is a GPT2 model of Italian regional languages trained on [collections of Italian "dialect poetry"](http://dialectpoetry.com) by Luigi Bonaffini. This is a multilingual model. Italians use the word "dialect" to describe their regional languages, but they are separate languages. And there's a lot of English in this dataset too. The challenge of this project is to train a model to write the languages of Italy. For those who do not know Italian, here's some (lowercase) text that you can type into the API box: - oggi si parla il dialetto - la sua poesia viene di - ma non sempre trova
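The equivalent of typing those prompts into the hosted API box is a local call to the text-generation pipeline, sketched below. The sampling parameters are illustrative choices, not values recommended by the author.

```python
# Minimal local generation sketch for ErykWdowiak/GPTalian
# (assumption: the sampling settings below are arbitrary, not tuned).
from transformers import pipeline

generator = pipeline("text-generation", model="ErykWdowiak/GPTalian")
for prompt in ["oggi si parla il dialetto", "la sua poesia viene di", "ma non sempre trova"]:
    out = generator(prompt, do_sample=True, max_length=40, top_p=0.95)
    print(out[0]["generated_text"])
```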
Evgeneus/distilbert-base-uncased-finetuned-ner
Evgeneus
distilbert
15
4
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,372
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0845 - Precision: 0.8754 - Recall: 0.9058 - F1: 0.8904 - Accuracy: 0.9763 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2529 | 1.0 | 878 | 0.0845 | 0.8754 | 0.9058 | 0.8904 | 0.9763 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
Eyvaz/wav2vec2-base-russian-big-kaggle
Eyvaz
wav2vec2
12
8
transformers
1
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,072
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-russian-big-kaggle This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.13.3 - Tokenizers 0.10.3
Eyvaz/wav2vec2-base-russian-demo-kaggle
Eyvaz
wav2vec2
22
7
transformers
1
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,047
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-russian-demo-kaggle This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: inf - Wer: 0.9997 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.0102 | 1.03 | 500 | inf | 0.9997 | | 0.0068 | 2.06 | 1000 | inf | 0.9997 | | 0.0 | 3.09 | 1500 | inf | 0.9997 | | 0.0313 | 4.12 | 2000 | inf | 0.9997 | | 0.0 | 5.15 | 2500 | inf | 0.9997 | | 0.0052 | 6.19 | 3000 | inf | 0.9997 | | 0.0287 | 7.22 | 3500 | inf | 0.9997 | | 0.0 | 8.25 | 4000 | inf | 0.9997 | | 0.01 | 9.28 | 4500 | inf | 0.9997 | | 0.0 | 10.31 | 5000 | inf | 0.9997 | | 0.3919 | 11.34 | 5500 | inf | 0.9997 | | 0.0 | 12.37 | 6000 | inf | 0.9997 | | 0.0 | 13.4 | 6500 | inf | 0.9997 | | 0.0 | 14.43 | 7000 | inf | 0.9997 | | 0.6422 | 15.46 | 7500 | inf | 0.9997 | | 0.0 | 16.49 | 8000 | inf | 0.9997 | | 0.0 | 17.53 | 8500 | inf | 0.9997 | | 0.0 | 18.56 | 9000 | inf | 0.9997 | | 0.0 | 19.59 | 9500 | inf | 0.9997 | | 0.0 | 20.62 | 10000 | inf | 0.9997 | | 0.0427 | 21.65 | 10500 | inf | 0.9997 | | 0.0 | 22.68 | 11000 | inf | 0.9997 | | 0.0 | 23.71 | 11500 | inf | 0.9997 | | 0.0 | 24.74 | 12000 | inf | 0.9997 | | 0.0091 | 25.77 | 12500 | inf | 0.9997 | | 0.1243 | 26.8 | 13000 | inf | 0.9997 | | 0.0 | 27.83 | 13500 | inf | 0.9997 | | 0.0 | 28.87 | 14000 | inf | 0.9997 | | 0.0 | 29.9 | 14500 | inf | 0.9997 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.13.3 - Tokenizers 0.10.3
FOFer/distilbert-base-uncased-finetuned-squad
FOFer
distilbert
12
3
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad_v2']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,288
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4306 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2169 | 1.0 | 8235 | 1.1950 | | 0.9396 | 2.0 | 16470 | 1.2540 | | 0.7567 | 3.0 | 24705 | 1.4306 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
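Because the model was fine-tuned on squad_v2, some questions are legitimately unanswerable; the sketch below therefore passes `handle_impossible_answer=True` so the pipeline may return an empty span. The question/context pair is only an illustration.

```python
# Minimal extractive-QA sketch for FOFer/distilbert-base-uncased-finetuned-squad.
from transformers import pipeline

qa = pipeline("question-answering", model="FOFer/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset.",
    handle_impossible_answer=True,  # squad_v2 allows a 'no answer' prediction
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```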
FabioDataGeek/distilbert-base-uncased-finetuned-emotion
FabioDataGeek
distilbert
14
6
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,343
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2196 - Accuracy: 0.926 - F1: 0.9258 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8279 | 1.0 | 250 | 0.3208 | 0.9025 | 0.8979 | | 0.2538 | 2.0 | 500 | 0.2196 | 0.926 | 0.9258 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Fan-s/reddit-tc-bert
Fan-s
bert
10
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
2,140
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-uncased-base This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on a Reddit-dialogue dataset. This model can be used for text classification: given two sentences, it predicts whether they are related. It achieves the following results on the evaluation set: - Loss: 0.2297 - Accuracy: 0.9267 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 320 - eval_batch_size: 80 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.11.0 ## Usage (HuggingFace Transformers) You can use the model like this: ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer # label_list label_list = ['matched', 'unmatched'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained("Fan-s/reddit-tc-bert", use_fast=True) model = AutoModelForSequenceClassification.from_pretrained("Fan-s/reddit-tc-bert") # Set the input post = "don't make gravy with asbestos." response = "i'd expect someone with a culinary background to know that. since we're talking about school dinner ladies, they need to learn this pronto." # Predict whether the two sentences are matched def predict(post, response, max_seq_length=128): with torch.no_grad(): args = (post, response) input = tokenizer(*args, padding="max_length", max_length=max_seq_length, truncation=True, return_tensors="pt") output = model(**input) logits = output.logits item = torch.argmax(logits, dim=1) predict_label = label_list[item] return predict_label, logits predict_label, logits = predict(post, response) # Matched print("predict_label:", predict_label) ```
FardinSaboori/bert-finetuned-squad
FardinSaboori
bert
12
3
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
955
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
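In contrast to the pipeline call shown for the previous SQuAD fine-tune, this sketch decodes the answer span from the start/end logits directly; the question/context pair is illustrative only.

```python
# Hand-decoded QA sketch for FardinSaboori/bert-finetuned-squad.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "FardinSaboori/bert-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "On which dataset was the model fine-tuned?"
context = "bert-finetuned-squad is a fine-tuned version of bert-base-cased on the squad dataset."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end positions (a real decoder would also
# check that end >= start and cap the span length).
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer_ids = inputs["input_ids"][0, start : end + 1]
print(tokenizer.decode(answer_ids))
```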
FarisHijazi/wav2vec2-large-xls-r-300m-turkish-colab
FarisHijazi
wav2vec2
11
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,108
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 256 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.13.3 - Tokenizers 0.10.3
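The auto-generated card stops at the training recipe; transcribing audio with the checkpoint would look roughly like the sketch below. The 16 kHz mono input and the file name `sample_tr.wav` are assumptions following common wav2vec2 practice, not details stated in the card.

```python
# Transcription sketch for FarisHijazi/wav2vec2-large-xls-r-300m-turkish-colab
# (assumptions: 16 kHz mono audio, hypothetical local file "sample_tr.wav").
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "FarisHijazi/wav2vec2-large-xls-r-300m-turkish-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sample_rate = torchaudio.load("sample_tr.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```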
Fauzan/autonlp-judulberita-32517788
Fauzan
bert
9
1
transformers
0
text-classification
true
false
false
null
['unk']
['Fauzan/autonlp-data-judulberita']
0.9413042739759596
0
0
0
0
0
0
0
autonlp
false
true
true
1,005
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 32517788 - CO2 Emissions (in grams): 0.9413042739759596 ## Validation Metrics - Loss: 0.32112351059913635 - Accuracy: 0.8641304347826086 - Precision: 0.8055555555555556 - Recall: 0.8405797101449275 - AUC: 0.9493383742911153 - F1: 0.8226950354609929 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Fauzan/autonlp-judulberita-32517788 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Fauzan/autonlp-judulberita-32517788", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Fauzan/autonlp-judulberita-32517788", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
Fengkai/distilbert-base-uncased-finetuned-emotion
Fengkai
distilbert
21
27
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,563
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1495 - Accuracy: 0.9385 - F1: 0.9383 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.1739 | 1.0 | 250 | 0.1827 | 0.931 | 0.9302 | | 0.1176 | 2.0 | 500 | 0.1567 | 0.9325 | 0.9326 | | 0.0994 | 3.0 | 750 | 0.1555 | 0.9385 | 0.9389 | | 0.08 | 4.0 | 1000 | 0.1496 | 0.9445 | 0.9443 | | 0.0654 | 5.0 | 1250 | 0.1495 | 0.9385 | 0.9383 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
Ferch423/gpt2-small-portuguese-wikipediabio
Ferch423
gpt2
11
8
transformers
0
text-generation
true
false
true
null
['pt']
['wikipedia']
null
0
0
0
0
0
0
0
['pt', 'wikipedia', 'gpt2', 'finetuning']
false
true
true
398
# GPT2-SMALL-PORTUGUESE-WIKIPEDIABIO This is a fine-tuned version of [gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese) by pierreguillou. It was trained on a dataset of biographical abstracts extracted from DBpedia (over 100,000 people's abstracts). The model is intended as a simple and fun experiment for generating abstract-style biographies from ordinary people's names.
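Since the card describes generating biography-style abstracts from a person's name, a generation sketch with `generate()` is shown below. The name prompt and sampling settings are guesses for illustration, not documented behaviour of the model.

```python
# Biography-generation sketch for Ferch423/gpt2-small-portuguese-wikipediabio
# (assumptions: the prompt format "<name> é" and the sampling settings are guesses).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ferch423/gpt2-small-portuguese-wikipediabio"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Maria Silva é"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    do_sample=True,
    max_length=60,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```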
Fhrozen/test_an4
Fhrozen
null
31
1
espnet
0
automatic-speech-recognition
false
false
false
cc-by-4.0
['en']
['an4']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'automatic-speech-recognition']
false
true
true
7,699
## ESPnet2 ASR model ### `Fhrozen/test_an4` This model was trained by Fhrozen using an4 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout b8df4c928e132acff78d196988bdb68a66987952 pip install -e . cd egs2/an4/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model Fhrozen/test_an4 ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Wed Oct 20 00:00:46 JST 2021` - python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]` - espnet version: `espnet 0.10.4a1` - pytorch version: `pytorch 1.9.0` - Git hash: `b8df4c928e132acff78d196988bdb68a66987952` - Commit date: `Tue Oct 19 07:48:11 2021 -0400` ## asr_train_raw_en_bpe30 ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|773|4.0|22.3|73.7|0.1|96.1|100.0| |inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|591|2.7|21.8|75.5|0.0|97.3|100.0| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|2565|17.2|16.4|66.4|1.0|83.8|100.0| |inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|1915|15.5|16.4|68.1|0.9|85.5|100.0| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|2695|21.1|15.6|63.3|0.9|79.9|100.0| |inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|2015|19.4|15.6|65.0|0.9|81.5|100.0| ## ASR config <details><summary>expand</summary> ``` config: null print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_raw_en_bpe30 ngpu: 0 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: null dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 40 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: - 10 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe30/train/speech_shape - exp/asr_stats_raw_en_bpe30/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_en_bpe30/valid/speech_shape - exp/asr_stats_raw_en_bpe30/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 
train_data_path_and_name_and_type: - - dump/raw/train_nodev/wav.scp - speech - sound - - dump/raw/train_nodev/text - text - text valid_data_path_and_name_and_type: - - dump/raw/train_dev/wav.scp - speech - sound - - dump/raw/train_dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adadelta optim_conf: {} scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - ▁ - T - E - O - R - Y - A - H - U - S - I - F - B - L - P - D - G - M - C - V - X - J - K - Z - W - N - Q - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: ctc_weight: 0.5 ignore_id: -1 lsm_weight: 0.0 length_normalized_loss: false report_cer: true report_wer: true sym_space: <space> sym_blank: <blank> extract_feats_in_collect_stats: true use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram30/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: null specaug_conf: {} normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe30/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: rnn encoder_conf: {} postencoder: null postencoder_conf: {} decoder: rnn decoder_conf: {} required: - output_dir - token_list version: 0.10.4a1 distributed: false ``` </details> ## LM config <details><summary>expand</summary> ``` config: conf/train_lm.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/lm_train_lm_en_bpe30 ngpu: 0 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: null dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 40 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min keep_nbest_models: 1 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 256 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/lm_stats_en_bpe30/train/text_shape.bpe valid_shape_file: - exp/lm_stats_en_bpe30/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/lm_train.txt - text - text valid_data_path_and_name_and_type: - - dump/raw/train_dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.1 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - ▁ - T - E - O - R - Y - A - H - U - S - I - F - B - L - 
P - D - G - M - C - V - X - J - K - Z - W - N - Q - <sos/eos> init: null model_conf: ignore_id: 0 use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram30/bpe.model non_linguistic_symbols: null cleaner: null g2p: null lm: seq_rnn lm_conf: unit: 650 nlayers: 2 required: - output_dir - token_list version: 0.10.4a1 distributed: false ``` </details>
Fiddi/distilbert-base-uncased-finetuned-ner
Fiddi
distilbert
19
9
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,555
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0604 - Precision: 0.9291 - Recall: 0.9376 - F1: 0.9333 - Accuracy: 0.9841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2412 | 1.0 | 878 | 0.0688 | 0.9178 | 0.9246 | 0.9212 | 0.9815 | | 0.0514 | 2.0 | 1756 | 0.0608 | 0.9251 | 0.9344 | 0.9298 | 0.9832 | | 0.0304 | 3.0 | 2634 | 0.0604 | 0.9291 | 0.9376 | 0.9333 | 0.9841 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
Fidlobabovic/beta-kvantorium-simple-small
Fidlobabovic
roberta
11
5
transformers
0
fill-mask
true
false
true
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
487
Beta-kvantorium-simple-small is a RoBERTa transformers model pretrained on a large corpus of Russian Kvantorium data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the objective of automating communication with the Kvantorium community and mentors.
Fidlobabovic/beta-kvantorium-small
Fidlobabovic
roberta
8
5
transformers
0
fill-mask
true
false
true
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
651
Beta-kvantorium-small is a RoBERTa transformers model pretrained on a large corpus of Russian Kvantorium data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the objective of automating communication with the Kvantorium community and mentors. https://sun9-49.userapi.com/impg/CIJZKA_r9xoLYd47Lvjv_8jyu6epadPyergP3Q/zw3J_E6IlJo.jpg?size=546x385&quality=96&sign=139fa29b864d36958feab4731cc684dc&type=album
Finnish-NLP/convbert-base-finnish
Finnish-NLP
convbert
15
0
transformers
1
feature-extraction
true
true
false
apache-2.0
['fi']
['Finnish-NLP/mc4_fi_cleaned', 'wikipedia']
null
0
0
0
0
0
0
0
['finnish', 'convbert']
false
true
true
7,847
# ConvBERT for Finnish Pretrained ConvBERT model on the Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in [this paper](https://arxiv.org/abs/2008.02496) and first released at [this page](https://github.com/yitu-opensource/ConvBert). **Note**: this model is the ConvBERT discriminator model intended to be used for fine-tuning on downstream tasks like text classification. The ConvBERT generator model intended to be used for the fill-mask task is released here: [Finnish-NLP/convbert-base-generator-finnish](https://huggingface.co/Finnish-NLP/convbert-base-generator-finnish) ## Model description Finnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN). This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs. Compared to BERT and ELECTRA models, the ConvBERT model utilizes a span-based dynamic convolution to replace some of the global self-attention heads for modeling local input sequence dependencies. These convolution heads, together with the rest of the self-attention heads, form a new mixed attention block that should be more efficient at both global and local context learning. ## Intended uses & limitations You can use the raw model for extracting features or fine-tune it to a downstream task like text classification. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import ConvBertTokenizer, ConvBertModel import torch tokenizer = ConvBertTokenizer.from_pretrained("Finnish-NLP/convbert-base-finnish") model = ConvBertModel.from_pretrained("Finnish-NLP/convbert-base-finnish") inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="pt") outputs = model(**inputs) print(outputs.last_hidden_state) ``` and in TensorFlow: ```python from transformers import ConvBertTokenizer, TFConvBertModel tokenizer = ConvBertTokenizer.from_pretrained("Finnish-NLP/convbert-base-finnish") model = TFConvBertModel.from_pretrained("Finnish-NLP/convbert-base-finnish") inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="tf") outputs = model(inputs) print(outputs.last_hidden_state) ``` ### Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data This Finnish ConvBERT model was pretrained on the combination of five datasets: - [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo). - [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset - [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501) - [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001) - [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803) Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. ## Training procedure ### Preprocessing The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased, so this model is case-sensitive: it makes a difference between finnish and Finnish. ### Pretraining The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps. The optimizer used was AdamW with a learning rate of 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after. Training code was from the official [ConvBERT repository](https://github.com/yitu-opensource/ConvBert) and some instructions were also used from [here](https://github.com/stefan-it/turkish-bert/blob/master/convbert/CHEATSHEET.md). ## Evaluation results Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512, but Eduskunta only with the 128 sequence length. When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model and to our other models: | | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length | |-----------------------------------------------|----------|---------------------|---------------------|----------------------| |Finnish-NLP/convbert-base-finnish |86.98 |94.04 |95.02 |71.87 | |Finnish-NLP/electra-base-discriminator-finnish |86.25 |93.78 |94.77 |70.20 | |Finnish-NLP/roberta-large-wechsel-finnish |88.19 |**94.91** |95.18 |74.47 | |Finnish-NLP/roberta-large-finnish-v2 |88.17 |94.46 |95.22 |74.83 | |Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 | |TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |94.90 |**95.49** |**76.07** | To conclude, this ConvBERT model beats the ELECTRA model while losing to the other models, but it is still fairly competitive with our roberta-large models considering that this ConvBERT model has 106M parameters while the roberta-large models have 355M parameters. ConvBERT beating ELECTRA is also in line with the findings of the [ConvBERT paper](https://arxiv.org/abs/2008.02496). ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). 
## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗
Finnish-NLP/convbert-base-generator-finnish
Finnish-NLP
convbert
9
3
transformers
0
fill-mask
true
false
false
apache-2.0
['fi']
['Finnish-NLP/mc4_fi_cleaned', 'wikipedia']
null
0
0
0
0
0
0
0
['finnish', 'convbert']
false
true
true
6,090
# ConvBERT for Finnish Pretrained ConvBERT model on Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in [this paper](https://arxiv.org/abs/2008.02496) and first released at [this page](https://github.com/yitu-opensource/ConvBert). **Note**: this model is the ConvBERT generator model intended to be used for the fill-mask task. The ConvBERT discriminator model intended to be used for fine-tuning on downstream tasks like text classification is released here [Finnish-NLP/convbert-base-finnish](https://huggingface.co/Finnish-NLP/convbert-base-finnish) ## Model description Finnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN). This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs. Compared to BERT and ELECTRA models, the ConvBERT model utilizes a span-based dynamic convolution to replace some of the global self-attention heads for modeling local input sequence dependencies. These convolution heads, together with the rest of the self-attention heads, form a new mixed attention block that should be more efficient at both global and local context learning. ## Intended uses & limitations You can use this generator model mainly just for the fill-mask task. For other tasks, check the [Finnish-NLP/convbert-base-finnish](https://huggingface.co/Finnish-NLP/convbert-base-finnish) model instead. ### How to use Here is how to use this model directly with a pipeline for the fill-mask task: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Finnish-NLP/convbert-base-generator-finnish') >>> unmasker("Moikka olen [MASK] kielimalli.") [{'score': 0.08341152966022491, 'token': 4619, 'token_str': 'suomalainen', 'sequence': 'Moikka olen suomalainen kielimalli.'}, {'score': 0.02831297740340233, 'token': 25583, 'token_str': 'ranskalainen', 'sequence': 'Moikka olen ranskalainen kielimalli.'}, {'score': 0.027857203036546707, 'token': 37714, 'token_str': 'kiinalainen', 'sequence': 'Moikka olen kiinalainen kielimalli.'}, {'score': 0.027701903134584427, 'token': 21614, 'token_str': 'ruotsalainen', 'sequence': 'Moikka olen ruotsalainen kielimalli.'}, {'score': 0.026388710364699364, 'token': 591, 'token_str': 'hyvä', 'sequence': 'Moikka olen hyvä kielimalli.'}] ``` ### Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. 
Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data This Finnish ConvBERT model was pretrained on the combination of five datasets: - [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo). - [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset - [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501) - [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001) - [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803) Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. ## Training procedure ### Preprocessing The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased, so this model is case-sensitive: it makes a difference between finnish and Finnish. ### Pretraining The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps. The optimizer used was AdamW with a learning rate of 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after. Training code was from the official [ConvBERT repository](https://github.com/yitu-opensource/ConvBert) and some instructions were also used from [here](https://github.com/stefan-it/turkish-bert/blob/master/convbert/CHEATSHEET.md). ## Evaluation results For evaluation results, check the [Finnish-NLP/convbert-base-finnish](https://huggingface.co/Finnish-NLP/convbert-base-finnish) model repository instead. ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). ## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗
Finnish-NLP/electra-base-discriminator-finnish
Finnish-NLP
electra
12
2
transformers
1
null
true
false
false
apache-2.0
['fi']
['Finnish-NLP/mc4_fi_cleaned', 'wikipedia']
null
0
0
0
0
0
0
0
['finnish', 'electra']
false
true
true
7,368
# ELECTRA for Finnish Pretrained ELECTRA model on Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in [this paper](https://openreview.net/pdf?id=r1xMH1BtvB) and first released at [this page](https://github.com/google-research/electra). **Note**: this model is the ELECTRA discriminator model intended to be used for fine-tuning on downstream tasks like text classification. The ELECTRA generator model intended to be used for the fill-mask task is released here [Finnish-NLP/electra-base-generator-finnish](https://huggingface.co/Finnish-NLP/electra-base-generator-finnish) ## Model description Finnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN). This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs. ## Intended uses & limitations You can use the raw model for extracting features or fine-tune it to a downstream task like text classification. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import ElectraTokenizer, ElectraModel import torch tokenizer = ElectraTokenizer.from_pretrained("Finnish-NLP/electra-base-discriminator-finnish") model = ElectraModel.from_pretrained("Finnish-NLP/electra-base-discriminator-finnish") inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="pt") outputs = model(**inputs) print(outputs.last_hidden_state) ``` and in TensorFlow: ```python from transformers import ElectraTokenizer, TFElectraModel tokenizer = ElectraTokenizer.from_pretrained("Finnish-NLP/electra-base-discriminator-finnish") model = TFElectraModel.from_pretrained("Finnish-NLP/electra-base-discriminator-finnish", from_pt=True) inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="tf") outputs = model(inputs) print(outputs.last_hidden_state) ``` ### Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data This Finnish ELECTRA model was pretrained on the combination of five datasets: - [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. 
We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo). - [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset - [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501) - [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001) - [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803) Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. ## Training procedure ### Preprocessing The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased, so this model is case-sensitive: it makes a difference between finnish and Finnish. ### Pretraining The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps. The optimizer used was AdamW with a learning rate of 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after. Training code was from the official [ELECTRA repository](https://github.com/google-research/electra) and some instructions were also used from [here](https://github.com/stefan-it/turkish-bert/blob/master/electra/CHEATSHEET.md). ## Evaluation results Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512, but Eduskunta only with the 128 sequence length. When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model and to our other models: | | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length | |-----------------------------------------------|----------|---------------------|---------------------|----------------------| |Finnish-NLP/electra-base-discriminator-finnish |86.25 |93.78 |94.77 |70.20 | |Finnish-NLP/convbert-base-finnish |86.98 |94.04 |95.02 |71.87 | |Finnish-NLP/roberta-large-wechsel-finnish |88.19 |**94.91** |95.18 |74.47 | |Finnish-NLP/roberta-large-finnish-v2 |88.17 |94.46 |95.22 |74.83 | |Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 | |TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |94.90 |**95.49** |**76.07** | To conclude, this ELECTRA model loses to the other models but is still fairly competitive with our roberta-large models considering that this ELECTRA model has 110M parameters while the roberta-large models have 355M parameters. ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). ## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗
Finnish-NLP/electra-base-generator-finnish
Finnish-NLP
electra
8
4
transformers
0
fill-mask
true
false
false
apache-2.0
['fi']
['Finnish-NLP/mc4_fi_cleaned', 'wikipedia']
null
0
0
0
0
0
0
0
['finnish', 'electra']
false
true
true
5,754
# ELECTRA for Finnish Pretrained ELECTRA model on Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in [this paper](https://openreview.net/pdf?id=r1xMH1BtvB) and first released at [this page](https://github.com/google-research/electra). **Note**: this model is the ELECTRA generator model intended to be used for the fill-mask task. The ELECTRA discriminator model intended to be used for fine-tuning on downstream tasks like text classification is released here [Finnish-NLP/electra-base-discriminator-finnish](https://huggingface.co/Finnish-NLP/electra-base-discriminator-finnish) ## Model description Finnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN). This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs. ## Intended uses & limitations You can use this generator model mainly just for the fill-mask task. For other tasks, check the [Finnish-NLP/electra-base-discriminator-finnish](https://huggingface.co/Finnish-NLP/electra-base-discriminator-finnish) model instead. ### How to use Here is how to use this model directly with a pipeline for the fill-mask task: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Finnish-NLP/electra-base-generator-finnish') >>> unmasker("Moikka olen [MASK] kielimalli.") [{'score': 0.0708453431725502, 'token': 4619, 'token_str': 'suomalainen', 'sequence': 'Moikka olen suomalainen kielimalli.'}, {'score': 0.042563650757074356, 'token': 1153, 'token_str': 'uusi', 'sequence': 'Moikka olen uusi kielimalli.'}, {'score': 0.03219178691506386, 'token': 591, 'token_str': 'hyvä', 'sequence': 'Moikka olen hyvä kielimalli.'}, {'score': 0.03175133094191551, 'token': 3134, 'token_str': 'vanha', 'sequence': 'Moikka olen vanha kielimalli.'}, {'score': 0.019662367179989815, 'token': 25583, 'token_str': 'ranskalainen', 'sequence': 'Moikka olen ranskalainen kielimalli.'}] ``` ### Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. 
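### Fill-mask without the pipeline For reference, the pipeline call shown above corresponds roughly to the lower-level usage sketched below. This is only an illustration; the choice of five candidates simply mirrors the pipeline default.

```python
import torch
from transformers import ElectraForMaskedLM, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("Finnish-NLP/electra-base-generator-finnish")
model = ElectraForMaskedLM.from_pretrained("Finnish-NLP/electra-base-generator-finnish")

inputs = tokenizer("Moikka olen [MASK] kielimalli.", return_tensors="pt")
# Position(s) of the [MASK] token in the input
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    logits = model(**inputs).logits

# Top 5 candidate tokens for the masked position
probs = logits[0, mask_positions].softmax(dim=-1)
top_probs, top_ids = probs.topk(5)
for prob, token_id in zip(top_probs[0], top_ids[0]):
    print(tokenizer.convert_ids_to_tokens(int(token_id)), float(prob))
```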
## Training data This Finnish ELECTRA model was pretrained on the combination of five datasets: - [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo). - [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset - [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501) - [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001) - [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803) Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. ## Training procedure ### Preprocessing The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased, so this model is case-sensitive: it makes a difference between finnish and Finnish. ### Pretraining The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps. The optimizer used was AdamW with a learning rate of 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after. Training code was from the official [ELECTRA repository](https://github.com/google-research/electra) and some instructions were also used from [here](https://github.com/stefan-it/turkish-bert/blob/master/electra/CHEATSHEET.md). ## Evaluation results For evaluation results, check the [Finnish-NLP/electra-base-discriminator-finnish](https://huggingface.co/Finnish-NLP/electra-base-discriminator-finnish) model repository instead. ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). ## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗
Finnish-NLP/gpt2-finnish
Finnish-NLP
gpt2
18
140
transformers
1
text-generation
true
false
true
apache-2.0
['fi']
['Finnish-NLP/mc4_fi_cleaned', 'wikipedia']
null
0
0
0
0
0
0
0
['finnish', 'gpt2']
false
true
true
7,837
# GPT-2 for Finnish Pretrained GPT-2 model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). **Note**: this model is the quite small 117M parameter variant as in Huggingface's [GPT-2 config](https://huggingface.co/gpt2), so not the famous big 1.5B parameter variant by OpenAI. We also have the bigger 345M parameter variant [gpt2-medium-finnish](https://huggingface.co/Finnish-NLP/gpt2-medium-finnish) and the 774M parameter variant [gpt2-large-finnish](https://huggingface.co/Finnish-NLP/gpt2-large-finnish) available, which perform better compared to this model. ## Model description Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences: inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model internally uses a mask mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating texts from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation: ```python >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='Finnish-NLP/gpt2-finnish') >>> generator("Tekstiä tuottava tekoäly on", max_length=30, num_return_sequences=5) [{'generated_text': 'Tekstiä tuottava tekoäly on kuin onkin hyvin pieni. Sitä voi käyttää myös hyvin nopeasti ja myös täysin automatisoituna, eikä sitä tarvitse käydä läpi. Se'}, {'generated_text': 'Tekstiä tuottava tekoäly on saanut jalansijaa, mutta Suomessa se on jo ehtinyt hajota käsiin, koska sen avulla ei pystytä tuottamaan täysin ajantasaisia'}, {'generated_text': 'Tekstiä tuottava tekoäly on tehnyt työtä kymmenien vuosien ajan ja ottanut käyttöön jo yli kahden vuosikymmenen ajan tekoälyn ratkaisuja. Tekoäly on jo pitkään tehnyt työtä'}, {'generated_text': 'Tekstiä tuottava tekoäly on tekoälyn sovellus, jota käytetään esimerkiksi liiketoiminnan ja päätöksenteon tukena. 
Työhön liittyy data-analyysin ohella tekoälyn avulla esimerkiksi tekoäl'}, {'generated_text': 'Tekstiä tuottava tekoäly on juuri nyt erityisen hyödyllinen, koska se tunnistaa käyttäjän tietokoneen ruudulla olevat ilmoitukset, kuten näytön värin ja osoittimet ilman välkyn'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-finnish') model = GPT2Model.from_pretrained('Finnish-NLP/gpt2-finnish') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-finnish') model = TFGPT2Model.from_pretrained('Finnish-NLP/gpt2-finnish', from_pt=True) text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ## Training data This Finnish GPT-2 model was pretrained on the combination of six datasets: - [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo). - [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset - [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501) - [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401) - [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001) - [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803) Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens. ### Pretraining The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 300k steps (a bit over 2 epochs, 256 batch size). The optimizer used was a second-order optimization method called [Distributed Shampoo](https://github.com/google-research/google-research/tree/master/scalable_shampoo) with a learning rate of 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after. At first, the commonly used Adam optimizer was tried, but there were significant issues getting the model to converge even with multiple different learning rate trials, so the Adam optimizer was replaced with Distributed Shampoo, which worked a lot better. 
## Evaluation results Evaluation was done using the *validation* split of the [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned) dataset with [Perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) loses to our bigger model variants. | | Perplexity | |------------------------------------------|------------| |Finnish-NLP/gpt2-finnish |44.19 | |Finnish-NLP/gpt2-medium-finnish |34.08 | |Finnish-NLP/gpt2-large-finnish |**30.74** | ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). ## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗
Finnish-NLP/gpt2-large-finnish
Finnish-NLP
gpt2
17
114
transformers
1
text-generation
true
false
true
apache-2.0
['fi']
['Finnish-NLP/mc4_fi_cleaned', 'wikipedia']
null
0
0
0
0
0
0
0
['finnish', 'gpt2']
false
true
true
7,197
# GPT-2 large for Finnish Pretrained GPT-2 large model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). **Note**: this model is the 774M parameter variant as in Huggingface's [GPT-2-large config](https://huggingface.co/gpt2-large), so not the famous big 1.5B parameter variant by OpenAI. ## Model description Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences: inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model internally uses a mask mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating texts from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation: ```python >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='Finnish-NLP/gpt2-large-finnish') >>> generator("Tekstiä tuottava tekoäly on", max_length=30, num_return_sequences=5) [{'generated_text': 'Tekstiä tuottava tekoäly on valmis yhteistyöhön ihmisen kanssa: Tekoäly hoitaa ihmisen puolesta tekstin tuottamisen. Se myös ymmärtää, missä vaiheessa tekstiä voidaan alkaa kirjoittamaan'}, {'generated_text': 'Tekstiä tuottava tekoäly on älykäs, mutta se ei ole vain älykkäisiin koneisiin kuuluva älykäs olento, vaan se on myös kone. Se ei'}, {'generated_text': 'Tekstiä tuottava tekoäly on ehkä jo pian todellisuutta - se voisi tehdä myös vanhustenhoidosta nykyistä ä tuottava tekoäly on ehkä jo pian todellisuutta - se voisi tehdä'}, {'generated_text': 'Tekstiä tuottava tekoäly on kehitetty ihmisen ja ihmisen aivoihin yhteistyössä neurotieteiden ja käyttäytymistieteen tutkijatiimin kanssa. Uusi teknologia avaa aivan uudenlaisia tutkimusi'}, {'generated_text': 'Tekstiä tuottava tekoäly on kuin tietokone, jonka kanssa voi elää. Tekoälyn avulla voi kirjoittaa mitä tahansa, mistä tahansa ja miten paljon. Tässä'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-large-finnish') model = GPT2Model.from_pretrained('Finnish-NLP/gpt2-large-finnish') text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-large-finnish') model = TFGPT2Model.from_pretrained('Finnish-NLP/gpt2-large-finnish', from_pt=True) text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ## Training data This Finnish GPT-2 model was pretrained on the combination of six datasets: - [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo). - [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset - [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501) - [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401) - [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001) - [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803) Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens. ### Pretraining The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 640k steps (a bit over 1 epoch, 64 batch size). The optimizer used was a AdamW with learning rate 4e-5, learning rate warmup for 4000 steps and cosine decay of the learning rate after. ## Evaluation results Evaluation was done using the *validation* split of the [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned) dataset with [Perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller model variants. | | Perplexity | |------------------------------------------|------------| |Finnish-NLP/gpt2-large-finnish |**30.74** | |Finnish-NLP/gpt2-medium-finnish |34.08 | |Finnish-NLP/gpt2-finnish |44.19 | ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). 
## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗
Finnish-NLP/gpt2-medium-finnish
Finnish-NLP
gpt2
18
69
transformers
2
text-generation
true
false
true
apache-2.0
['fi']
['Finnish-NLP/mc4_fi_cleaned', 'wikipedia']
null
0
0
0
0
0
0
0
['finnish', 'gpt2']
false
true
true
7,653
# GPT-2 medium for Finnish Pretrained GPT-2 medium model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). **Note**: this model is the 345M parameter variant as in Huggingface's [GPT-2-medium config](https://huggingface.co/gpt2-medium), so not the famous big 1.5B parameter variant by OpenAI. We also have the bigger 774M parameter variant [gpt2-large-finnish](https://huggingface.co/Finnish-NLP/gpt2-large-finnish) available, which performs better compared to this model. ## Model description Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences: inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model internally uses a mask mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating texts from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation: ```python >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='Finnish-NLP/gpt2-medium-finnish') >>> generator("Tekstiä tuottava tekoäly on", max_length=30, num_return_sequences=5) [{'generated_text': 'Tekstiä tuottava tekoäly on tullut ihmisten arkeen viime vuosina. Se auttaa hahmottamaan ja tulkitsemaan monimutkaisia kokonaisuuksia ja ilmiöitä, joita ihmiset tekevät esimerkiksi ruokakaupassa'}, {'generated_text': 'Tekstiä tuottava tekoäly on jo ottanut haltuunsa myös ihmisten käyttämiä sovelluksia ja esimerkiksi pankkipalveluita. Sen vuoksi tekoäly on tärkeä kumppani etenkin yritysten liiketoiminnan kehittämisessä.-'}, {'generated_text': 'Tekstiä tuottava tekoäly on tekoälylle luonnollinen valinta, sillä sen avulla voi kommunikoida ihmisten kanssa hyvin pitkälle samalla tavalla kuin tietokoneiden kanssa. Se on kehittynyt muun'}, {'generated_text': 'Tekstiä tuottava tekoäly on ihmisen kehittämä tekoäly, jota ei vielä ole pystytty rakentamaan. Tekoäly kykenee toimimaan esimerkiksi matemaattisissa, tilastollisissa ja sosiaalisissa'}, {'generated_text': 'Tekstiä tuottava tekoäly on jo niin iso juttu ettei sitä kannata rajoittaakaan. 
Ja jos se saadaan käyttöön, niin se voi jo pian syrjäyttää perinteisen'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-medium-finnish') model = GPT2Model.from_pretrained('Finnish-NLP/gpt2-medium-finnish') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-medium-finnish') model = TFGPT2Model.from_pretrained('Finnish-NLP/gpt2-medium-finnish', from_pt=True) text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ## Training data This Finnish GPT-2 model was pretrained on the combination of six datasets: - [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo). - [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset - [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501) - [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401) - [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001) - [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803) Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens. ### Pretraining The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 360k steps (a bit over 1 epoch, 128 batch size). The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after. ## Evaluation results Evaluation was done using the *validation* split of the [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned) dataset with [Perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) (smaller score the better) as the evaluation metric. 
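As a rough illustration, a perplexity figure of this kind can be computed for a single piece of text along the lines sketched below. This is only a minimal sketch, not the exact evaluation script used for the results; the example sentence is made up.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("Finnish-NLP/gpt2-medium-finnish")
model = GPT2LMHeadModel.from_pretrained("Finnish-NLP/gpt2-medium-finnish")
model.eval()

text = "Helsinki on Suomen pääkaupunki."
encodings = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels given, the model returns the mean cross-entropy loss over
    # the predicted tokens; perplexity is the exponential of that loss.
    outputs = model(**encodings, labels=encodings["input_ids"])

print(f"Perplexity: {torch.exp(outputs.loss).item():.2f}")
```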
As seen from the table below, this model (the first row of the table) performs better than our smaller [gpt2-finnish](https://huggingface.co/Finnish-NLP/gpt2-finnish) model variant but loses to our bigger [gpt2-large-finnish](https://huggingface.co/Finnish-NLP/gpt2-large-finnish) model. | | Perplexity | |------------------------------------------|------------| |Finnish-NLP/gpt2-medium-finnish |34.08 | |Finnish-NLP/gpt2-finnish |44.19 | |Finnish-NLP/gpt2-large-finnish |**30.74** | ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). ## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗
Finnish-NLP/roberta-large-finnish-v2
Finnish-NLP
roberta
17
18
transformers
0
fill-mask
true
false
true
apache-2.0
['fi']
['Finnish-NLP/mc4_fi_cleaned', 'wikipedia']
null
0
0
0
0
0
0
0
['finnish', 'roberta']
false
true
true
8,912
# RoBERTa large model for Finnish This **Finnish-NLP/roberta-large-finnish-v2** model is a new version of the previously trained [Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish) model. Training hyperparameters were the same, but the training dataset was cleaned more thoroughly with the goal of getting a better performing language model through the better cleaned data. Based on the model evaluations (check the table at the end), the slightly better cleaned data didn't seem to produce a better performing model. Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective. RoBERTa was introduced in [this paper](https://arxiv.org/abs/1907.11692) and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it makes a difference between finnish and Finnish. ## Model description Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs. ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at a model like GPT-2. 
### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Finnish-NLP/roberta-large-finnish-v2') >>> unmasker("Moikka olen <mask> kielimalli.") [{'score': 0.04741518571972847, 'token': 763, 'token_str': ' hyvä', 'sequence': 'Moikka olen hyvä kielimalli.'}, {'score': 0.036977022886276245, 'token': 505, 'token_str': ' myös', 'sequence': 'Moikka olen myös kielimalli.'}, {'score': 0.025283709168434143, 'token': 3089, 'token_str': ' huono', 'sequence': 'Moikka olen huono kielimalli.'}, {'score': 0.022848006337881088, 'token': 1852, 'token_str': ' toinen', 'sequence': 'Moikka olen toinen kielimalli.'}, {'score': 0.019232941791415215, 'token': 1029, 'token_str': ' siis', 'sequence': 'Moikka olen siis kielimalli.'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-finnish-v2') model = RobertaModel.from_pretrained('Finnish-NLP/roberta-large-finnish-v2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-finnish-v2') model = TFRobertaModel.from_pretrained('Finnish-NLP/roberta-large-finnish-v2', from_pt=True) text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. ## Training data This Finnish RoBERTa model was pretrained on the combination of five datasets: - [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo). - [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset - [Yle Finnish News Archive](http://urn.fi/urn:nbn:fi:lb-2017070501) - [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001) - [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803) Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. ## Training procedure ### Preprocessing The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked with `<s>` and the end of one by `</s>`. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `<mask>`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). 
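The same kind of dynamic masking can be reproduced with the masked language modeling data collator in the `transformers` library, which also uses the 15% / 80-10-10 scheme. The snippet below is only an illustration of the procedure, not the actual pretraining code of this model:

```python
from transformers import DataCollatorForLanguageModeling, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("Finnish-NLP/roberta-large-finnish-v2")

# 15% of tokens are selected; of those, 80% become <mask>, 10% are replaced by a
# random token and 10% are kept as-is. Labels are -100 except at selected positions.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoded = tokenizer("Moikka olen suomalainen kielimalli.")
batch = collator([encoded])

# Because the masking is sampled on the fly, re-running this gives a different
# masking each time, which is what "dynamic" masking refers to.
print(tokenizer.decode(batch["input_ids"][0]))
print(batch["labels"][0])
```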
### Pretraining The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 520k train steps (2 epochs, batch size 512) with a sequence length of 128 and continuing for 520k steps (1 epoch, batch size 64) with a sequence length of 512. The optimizer used for the 128 sequence training was AdamW, and for the 512 sequence training it was Adafactor (to save memory). Learning rate was 2e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), learning rate warmup for 1500 steps and linear decay of the learning rate after. ## Evaluation results Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length. When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model and to our previous [Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish) model: | | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length | |----------------------------------------|----------|---------------------|---------------------|----------------------| |Finnish-NLP/roberta-large-finnish-v2 |88.17 |94.46 |95.22 |74.83 | |Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 | |TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |**94.90** |**95.49** |**76.07** | To conclude, this model didn't significantly improve compared to our previous [Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish) model. This model is also slightly (~ 1%) losing to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model. ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). ## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗
Finnish-NLP/roberta-large-finnish
Finnish-NLP
roberta
18
27
transformers
2
fill-mask
true
false
true
apache-2.0
['fi']
['Finnish-NLP/mc4_fi_cleaned', 'wikipedia']
null
0
0
0
0
0
0
0
['finnish', 'roberta']
false
true
true
8,236
# RoBERTa large model for Finnish Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective. RoBERTa was introduced in [this paper](https://arxiv.org/abs/1907.11692) and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it makes a difference between finnish and Finnish. ## Model description Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs. ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at a model like GPT-2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Finnish-NLP/roberta-large-finnish') >>> unmasker("Moikka olen <mask> kielimalli.") [{'sequence': 'Moikka olen hyvä kielimalli.', 'score': 0.1535797119140625, 'token': 767, 'token_str': ' hyvä'}, {'sequence': 'Moikka olen paras kielimalli.', 'score': 0.04795042425394058, 'token': 2888, 'token_str': ' paras'}, {'sequence': 'Moikka olen huono kielimalli.', 'score': 0.04251479730010033, 'token': 3217, 'token_str': ' huono'}, {'sequence': 'Moikka olen myös kielimalli.', 'score': 0.027469098567962646, 'token': 520, 'token_str': ' myös'}, {'sequence': 'Moikka olen se kielimalli.', 'score': 0.013878575526177883, 'token': 358, 'token_str': ' se'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-finnish') model = RobertaModel.from_pretrained('Finnish-NLP/roberta-large-finnish') text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import RobertaTokenizer, TFRobertaModel

tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-finnish')
model = TFRobertaModel.from_pretrained('Finnish-NLP/roberta-large-finnish', from_pt=True)

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

### Limitations and bias

The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral.
Therefore, the model can have biased predictions.

## Training data

This Finnish RoBERTa model was pretrained on the combination of five datasets:

- [mc4](https://huggingface.co/datasets/mc4), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset.
- [wikipedia](https://huggingface.co/datasets/wikipedia), we used the Finnish subset of the wikipedia (August 2021) dataset.
- [Yle Finnish News Archive](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)

Raw datasets were cleaned to filter out bad-quality and non-Finnish examples. Together these cleaned datasets were around 78GB of text.

## Training procedure

### Preprocessing

The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) with a vocabulary size of 50265. The inputs of the
model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked with `<s>`
and the end of one by `</s>`.

The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

Contrary to BERT, the masking is done dynamically during pretraining (i.e., it changes at each epoch and is not fixed).

### Pretraining

The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 2 epochs
with a sequence length of 128 and continuing for one more epoch with a sequence length of 512. The optimizer used was Adafactor with a
learning rate of 2e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), a learning rate warmup for 1500 steps and
linear decay of the learning rate after.

## Evaluation results

Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled datasets:
[Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News
classification fine-tuning was done with two different sequence lengths, 128 and 512, while Eduskunta was fine-tuned only
with a sequence length of 128.
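As a rough illustration of this evaluation setup, a sequence-classification fine-tuning run with 🤗 Transformers could look like the sketch below. The dataset loading is hypothetical (the Yle News and Eduskunta corpora need their own preprocessing scripts), and the hyperparameters and label count shown are illustrative, not the exact values used for the reported results.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical CSV files with "text" and "label" columns; replace with the
# preprocessed Yle News or Eduskunta data.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

tokenizer = AutoTokenizer.from_pretrained("Finnish-NLP/roberta-large-finnish")
model = AutoModelForSequenceClassification.from_pretrained(
    "Finnish-NLP/roberta-large-finnish",
    num_labels=10,  # assumption: set to the actual number of classes in your data
)

def tokenize(batch):
    # 128-token truncation; the 512-length experiments would use max_length=512
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="yle-finetune", learning_rate=2e-5, num_train_epochs=3)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # enables padding via the default data collator
)
trainer.train()
```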
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model and to our previous [Finnish RoBERTa-large](https://huggingface.co/flax-community/RoBERTa-large-finnish) model trained during the Hugging Face JAX/Flax community week:

|                                        | Average  | Yle News 128 length | Yle News 512 length | Eduskunta 128 length |
|----------------------------------------|----------|---------------------|---------------------|----------------------|
|Finnish-NLP/roberta-large-finnish       |88.02     |94.53                |95.23                |74.30                 |
|TurkuNLP/bert-base-finnish-cased-v1     |**88.82** |**94.90**            |**95.49**            |**76.07**             |
|flax-community/RoBERTa-large-finnish    |87.72     |94.42                |95.06                |73.67                 |

To conclude, this model improves on our previous [Finnish RoBERTa-large](https://huggingface.co/flax-community/RoBERTa-large-finnish) model trained during the Hugging Face JAX/Flax community week, but still falls slightly (~1%) short of the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model.

## Acknowledgements

This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).

## Team Members

- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
- Tommi Vehviläinen, [Hugging Face profile](https://huggingface.co/Tommi)

Feel free to contact us for more details 🤗
Finnish-NLP/roberta-large-wechsel-finnish
Finnish-NLP
roberta
16
1
transformers
1
fill-mask
true
false
true
apache-2.0
['fi']
['Finnish-NLP/mc4_fi_cleaned', 'wikipedia']
null
0
0
0
0
0
0
0
['finnish', 'roberta']
false
true
true
10,586
# RoBERTa large model trained with WECHSEL method for Finnish

Pretrained RoBERTa model on the Finnish language using a masked language modeling (MLM) objective with the WECHSEL method.
RoBERTa was introduced in [this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta).

The WECHSEL method (Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models)
was introduced in [this paper](https://arxiv.org/abs/2112.06598) and first released in
[this repository](https://github.com/CPJKU/wechsel).

This model is case-sensitive: it makes a difference between finnish and Finnish.

## Model description

Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly
available data) with an automatic process to generate inputs and labels from those texts.

More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly
masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked
words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of the sentence.

This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.

## WECHSEL method

Using the WECHSEL method, we first took the pretrained English [roberta-large](https://huggingface.co/roberta-large) model,
replaced its tokenizer with our Finnish tokenizer, and initialized the model's token embeddings such that they are close to
semantically similar English tokens, by utilizing multilingual static word embeddings (by fastText) covering English and Finnish.
We were able to confirm the WECHSEL paper's finding that this method saves pretraining time and thus computing resources.

To get an idea of the WECHSEL method's training-time savings, the table below shows the MLM evaluation accuracies during
pretraining compared to the [Finnish-NLP/roberta-large-finnish-v2](https://huggingface.co/Finnish-NLP/roberta-large-finnish-v2)
model, which was trained from scratch:

|                                          | 10k train steps  | 100k train steps | 200k train steps | 270k train steps |
|------------------------------------------|------------------|------------------|------------------|------------------|
|Finnish-NLP/roberta-large-wechsel-finnish |37.61 eval acc    |58.14 eval acc    |61.60 eval acc    |62.77 eval acc    |
|Finnish-NLP/roberta-large-finnish-v2      |13.83 eval acc    |55.87 eval acc    |58.58 eval acc    |59.47 eval acc    |

Downstream text classification fine-tuning results can be found at the end of this card; there, the model trained with the
WECHSEL method did not significantly improve downstream performance. However, based on dozens of qualitative fill-mask
examples, we noticed that this WECHSEL model clearly outperforms our other models trained from scratch on the fill-mask task.
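For intuition, the core embedding-initialization idea behind WECHSEL can be sketched as follows. This is a simplified, conceptual illustration rather than the official `wechsel` library implementation: the static word vectors here are random placeholders, and in practice one would use aligned English/Finnish fastText embeddings and the library's full procedure (subword handling, similarity weighting, etc.).

```python
import numpy as np

# Placeholder aligned static embeddings; in practice these would be fastText
# vectors for English and Finnish projected into a shared space.
rng = np.random.default_rng(0)
en_static = {tok: rng.normal(size=300) for tok in ["cat", "dog", "house"]}
fi_static = {tok: rng.normal(size=300) for tok in ["kissa", "koira", "talo"]}

# Placeholder English RoBERTa input embeddings (one row per English token).
en_model_emb = {tok: rng.normal(size=1024) for tok in en_static}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def init_finnish_embedding(fi_token, k=2):
    """Initialize a Finnish token embedding as a similarity-weighted average
    of the model embeddings of its nearest English neighbours."""
    sims = {en: cosine(fi_static[fi_token], vec) for en, vec in en_static.items()}
    nearest = sorted(sims, key=sims.get, reverse=True)[:k]
    weights = np.array([max(sims[en], 0.0) for en in nearest])
    weights = weights / (weights.sum() + 1e-8)
    return sum(w * en_model_emb[en] for w, en in zip(weights, nearest))

fi_model_emb = {tok: init_finnish_embedding(tok) for tok in fi_static}
print({tok: emb.shape for tok, emb in fi_model_emb.items()})
```

The real method operates on subword tokens rather than whole words and then continues MLM pretraining from this initialization, which is where the training-time savings shown above come from.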
## Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to
make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT-2.

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Finnish-NLP/roberta-large-wechsel-finnish')
>>> unmasker("Moikka olen <mask> kielimalli.")

[{'sequence': 'Moikka olen hyvä kielimalli.',
  'score': 0.07757357507944107,
  'token': 763,
  'token_str': ' hyvä'},
 {'sequence': 'Moikka olen suomen kielimalli.',
  'score': 0.05297883599996567,
  'token': 3641,
  'token_str': ' suomen'},
 {'sequence': 'Moikka olen kuin kielimalli.',
  'score': 0.03747279942035675,
  'token': 523,
  'token_str': ' kuin'},
 {'sequence': 'Moikka olen suomalainen kielimalli.',
  'score': 0.031031042337417603,
  'token': 4966,
  'token_str': ' suomalainen'},
 {'sequence': 'Moikka olen myös kielimalli.',
  'score': 0.026489052921533585,
  'token': 505,
  'token_str': ' myös'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-wechsel-finnish')
model = RobertaModel.from_pretrained('Finnish-NLP/roberta-large-wechsel-finnish')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import RobertaTokenizer, TFRobertaModel

tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-wechsel-finnish')
model = TFRobertaModel.from_pretrained('Finnish-NLP/roberta-large-wechsel-finnish', from_pt=True)

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

### Limitations and bias

The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral.
Therefore, the model can have biased predictions.

## Training data

This Finnish RoBERTa model was pretrained on the combination of five datasets:

- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia), we used the Finnish subset of the wikipedia (August 2021) dataset.
- [Yle Finnish News Archive](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)

Raw datasets were cleaned to filter out bad-quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.

## Training procedure

### Preprocessing

The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) with a vocabulary size of 50265. The inputs of the
model take pieces of 512 contiguous tokens that may span over documents.
The beginning of a new document is marked with `<s>` and the end of one by `</s>`.

The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

Contrary to BERT, the masking is done dynamically during pretraining (i.e., it changes at each epoch and is not fixed).

### Pretraining

The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 270k steps
(a bit over 1 epoch, batch size 512) with a sequence length of 128, and continuing for 180k steps (batch size 64) with a sequence length of 512.
The optimizer used was Adafactor (to save memory). The learning rate was 2e-4 with \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and
\\(\epsilon = 1e-6\\), a learning rate warmup for 2500 steps and linear decay of the learning rate after.

## Evaluation results

Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled datasets:
[Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News
classification fine-tuning was done with two different sequence lengths, 128 and 512, while Eduskunta was fine-tuned only
with a sequence length of 128.

When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the
[FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model and to our previous
[Finnish-NLP/roberta-large-finnish-v2](https://huggingface.co/Finnish-NLP/roberta-large-finnish-v2) and
[Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish) models:

|                                          | Average  | Yle News 128 length | Yle News 512 length | Eduskunta 128 length |
|------------------------------------------|----------|---------------------|---------------------|----------------------|
|Finnish-NLP/roberta-large-wechsel-finnish |88.19     |**94.91**            |95.18                |74.47                 |
|Finnish-NLP/roberta-large-finnish-v2      |88.17     |94.46                |95.22                |74.83                 |
|Finnish-NLP/roberta-large-finnish         |88.02     |94.53                |95.23                |74.30                 |
|TurkuNLP/bert-base-finnish-cased-v1       |**88.82** |94.90                |**95.49**            |**76.07**             |

To conclude, this model did not significantly improve on our previous models, which were trained from scratch instead of using the
WECHSEL method as in this model. It also falls slightly (~1%) short of the
[FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model.

## Acknowledgements

This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).

## Team Members

- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)

Feel free to contact us for more details 🤗
Firat/albert-base-v2-finetuned-squad
Firat
albert
9
6
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,252
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# albert-base-v2-finetuned-squad

This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9901

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8584        | 1.0   | 5540  | 0.9056          |
| 0.6473        | 2.0   | 11080 | 0.8975          |
| 0.4801        | 3.0   | 16620 | 0.9901          |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
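As a usage illustration (not part of the original card), a SQuAD-style fine-tuned checkpoint like this one can typically be queried through the standard question-answering pipeline; the question and context below are made-up examples.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Firat/albert-base-v2-finetuned-squad")

# Made-up example inputs
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The albert-base-v2 model was fine-tuned on the SQuAD question answering dataset for three epochs.",
)
print(result["answer"], result["score"])
```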
Firat/distilbert-base-uncased-finetuned-squad
Firat
distilbert
12
3
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,274
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1460

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2856        | 1.0   | 2767 | 1.1919          |
| 1.012         | 2.0   | 5534 | 1.1332          |
| 0.8512        | 3.0   | 8301 | 1.1460          |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.18.0
- Tokenizers 0.10.3
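For readers reproducing a similar run, the hyperparameters listed above map roughly onto a 🤗 Transformers `TrainingArguments` configuration like the sketch below; the output directory name is arbitrary and the surrounding fine-tuning script is not shown.

```python
from transformers import TrainingArguments

# Rough equivalent of the hyperparameters listed above (illustrative only).
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the library defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",  # arbitrary name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumption: evaluate once per epoch
)
print(training_args.learning_rate, training_args.num_train_epochs)
```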
Firat/roberta-base-finetuned-squad
Firat
roberta
11
3
transformers
0
question-answering
true
false
false
mit
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,246
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# roberta-base-finetuned-squad

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8953

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8926        | 1.0   | 5536  | 0.8694          |
| 0.6821        | 2.0   | 11072 | 0.8428          |
| 0.5335        | 3.0   | 16608 | 0.8953          |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
FitoDS/wav2vec2-large-xls-r-300m-guarani-colab
FitoDS
wav2vec2
11
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,434
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-guarani-colab

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2392
- Wer: 1.0743

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 18.2131       | 49.94 | 400  | 3.2901          | 1.0    |
| 2.0496        | 99.94 | 800  | 3.2392          | 1.0743 |

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
FitoDS/xls-r-ab-test
FitoDS
wav2vec2
18
5
transformers
0
automatic-speech-recognition
true
false
false
null
['ab']
['common_voice']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer']
true
true
true
1,153
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

#

This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 133.5167
- Wer: 18.9286

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
ForutanRad/bert-fa-QA-v1
ForutanRad
bert
16
53
transformers
1
question-answering
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,206
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bert-fa-QA-v1

Persian question answering model based on BERT. This model is a fine-tuned version of [ParsBERT](https://arxiv.org/abs/2005.12515) on the PersianQA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7297

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2563        | 1.0   | 1126 | 1.7222          |
| 1.3372        | 2.0   | 2252 | 1.7297          |

### Framework versions

- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3