Dataset columns:
- pipeline_tag: stringclasses (48 values)
- library_name: stringclasses (205 values)
- text: stringlengths (0 to 18.3M)
- metadata: stringlengths (2 to 1.07B)
- id: stringlengths (5 to 122)
- last_modified: null
- tags: listlengths (1 to 1.84k)
- sha: null
- created_at: stringlengths (25 to 25)
null
null
{}
davinan/pico-bioelectra
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
Relevance prediction model
{}
davinan/relevance_prediction
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
davisxergs/DialogGPT-small-harrypotter
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dawoodkhan82/test
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
day/first-bot-large
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
day/first-bot-medium
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
day/first-bot-small
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
day/her-bot-small
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
dayyass/trocr-base-handwritten-vit-encoder
null
[ "transformers", "pytorch", "vit", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dbOUOdb/discord
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
A small French language model for French text generation (and possibly more NLP tasks...) **Introduction** This French GPT-2 model is based on the OpenAI GPT-2 small model. It was trained on a <b>very small (190 MB) dataset</b> from French Wikipedia using transfer learning and fine-tuning techniques, in just over a day on one Colab Pro instance with a single 16 GB GPU. It was created by applying the recipe of <b>Pierre Guillou</b>; see https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787 It is a proof of concept showing that it is possible to obtain a language model in any language with limited resources. It was fine-tuned from the English pre-trained GPT-2 small using the Hugging Face libraries (Transformers and Tokenizers) wrapped in the fastai v2 deep learning framework. All the fastai v2 fine-tuning techniques were used. It is now available on Hugging Face. For further information or requests, please refer to "Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)". The model might be improved with a larger dataset and more powerful training infrastructure. At least this one can be used for small fine-tuning experiments (e.g. with aitextgen). PS: I have lost the metrics, but the model speaks French with some minor grammar issues, and the coherence of the generated text is somewhat limited.
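A minimal generation sketch (not part of the original card; the prompt and generation settings are illustrative assumptions):

```python
# Hypothetical usage sketch: load the checkpoint with the standard Transformers
# text-generation pipeline and sample a short French continuation.
from transformers import pipeline

generator = pipeline("text-generation", model="dbddv01/gpt2-french-small")
print(generator("La Bretagne est une région", max_new_tokens=40)[0]["generated_text"])
```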
{"language": "fr", "tags": ["french", "gpt2", "model"]}
dbddv01/gpt2-french-small
null
[ "transformers", "pytorch", "jax", "safetensors", "gpt2", "text-generation", "french", "model", "fr", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-1b-italian-robust This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the Common Voice 7 & Libri Speech datasets. It achieves the following results on the evaluation set: - Loss: 0.2428 - Wer: 0.2960 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | No log | 0.07 | 400 | 1.0053 | 0.8058 | | 1.5087 | 0.13 | 800 | 0.9127 | 0.8104 | | 0.9552 | 0.2 | 1200 | 1.0360 | 0.8836 | | 0.9555 | 0.27 | 1600 | 0.9980 | 0.8577 | | 1.0259 | 0.34 | 2000 | 1.0103 | 0.8842 | | 1.0259 | 0.4 | 2400 | 0.9119 | 0.8466 | | 1.0365 | 0.47 | 2800 | 0.9000 | 0.8281 | | 1.0069 | 0.54 | 3200 | 0.7976 | 0.7875 | | 0.9688 | 0.61 | 3600 | 0.8126 | 0.8051 | | 0.9638 | 0.67 | 4000 | 0.7921 | 0.7903 | | 0.9638 | 0.74 | 4400 | 0.7703 | 0.7783 | | 0.9327 | 0.81 | 4800 | 0.7253 | 0.7463 | | 0.8992 | 0.88 | 5200 | 0.6841 | 0.7171 | | 0.8693 | 0.94 | 5600 | 0.6867 | 0.7250 | | 0.8433 | 1.01 | 6000 | 0.7077 | 0.7302 | | 0.8433 | 1.08 | 6400 | 0.6685 | 0.7091 | | 0.8499 | 1.14 | 6800 | 0.6355 | 0.6825 | | 0.8159 | 1.21 | 7200 | 0.6283 | 0.6800 | | 0.8001 | 1.28 | 7600 | 0.6288 | 0.6743 | | 0.7883 | 1.35 | 8000 | 0.5995 | 0.6633 | | 0.7883 | 1.41 | 8400 | 0.6195 | 0.6726 | | 0.7863 | 1.48 | 8800 | 0.6039 | 0.6588 | | 0.7713 | 1.55 | 9200 | 0.5842 | 0.6490 | | 0.7572 | 1.62 | 9600 | 0.5975 | 0.6533 | | 0.7442 | 1.68 | 10000 | 0.5508 | 0.6233 | | 0.7442 | 1.75 | 10400 | 0.5521 | 0.6209 | | 0.7296 | 1.82 | 10800 | 0.5760 | 0.6245 | | 0.7205 | 1.89 | 11200 | 0.5593 | 0.6144 | | 0.7106 | 1.95 | 11600 | 0.5672 | 0.6220 | | 0.7146 | 2.02 | 12000 | 0.5134 | 0.5911 | | 0.7146 | 2.09 | 12400 | 0.5069 | 0.5811 | | 0.6944 | 2.15 | 12800 | 0.5022 | 0.5962 | | 0.6817 | 2.22 | 13200 | 0.4989 | 0.5813 | | 0.6721 | 2.29 | 13600 | 0.4941 | 0.5742 | | 0.6774 | 2.36 | 14000 | 0.4775 | 0.5676 | | 0.6774 | 2.42 | 14400 | 0.4694 | 0.5525 | | 0.6621 | 2.49 | 14800 | 0.4720 | 0.5514 | | 0.6599 | 2.56 | 15200 | 0.4714 | 0.5553 | | 0.6591 | 2.63 | 15600 | 0.4578 | 0.5397 | | 0.645 | 2.69 | 16000 | 0.4619 | 0.5452 | | 0.645 | 2.76 | 16400 | 0.4578 | 0.5343 | | 0.6431 | 2.83 | 16800 | 0.4514 | 0.5328 | | 0.636 | 2.9 | 17200 | 0.4526 | 0.5325 | | 0.6433 | 2.96 | 17600 | 0.4561 | 0.5325 | | 0.6356 | 3.03 | 18000 | 0.4386 | 0.5191 | | 0.6356 | 3.1 | 18400 | 0.4291 | 0.5065 | | 0.6175 | 3.16 | 18800 | 0.4306 | 0.5170 | | 0.6187 | 3.23 | 19200 | 0.4256 | 0.5036 | | 0.607 | 3.3 | 19600 | 0.4198 | 0.5027 | | 0.6004 | 3.37 | 20000 | 0.4149 | 0.4906 | | 0.6004 | 3.43 | 20400 | 0.4114 | 0.4902 | | 0.6002 | 3.5 | 20800 | 0.4116 | 0.4967 | | 0.5926 | 3.57 | 21200 | 0.4066 | 0.4843 | | 0.5836 | 3.64 | 21600 | 0.3956 | 0.4791 | | 0.588 | 3.7 | 22000 | 0.3941 | 0.4729 | | 0.588 | 3.77 | 22400 | 0.3972 | 
0.4799 | | 0.5739 | 3.84 | 22800 | 0.4018 | 0.4790 | | 0.5778 | 3.91 | 23200 | 0.3936 | 0.4750 | | 0.5768 | 3.97 | 23600 | 0.3936 | 0.4751 | | 0.5651 | 4.04 | 24000 | 0.3953 | 0.4706 | | 0.5651 | 4.11 | 24400 | 0.3906 | 0.4659 | | 0.5704 | 4.17 | 24800 | 0.3807 | 0.4557 | | 0.5594 | 4.24 | 25200 | 0.3817 | 0.4610 | | 0.5509 | 4.31 | 25600 | 0.3755 | 0.4553 | | 0.5439 | 4.38 | 26000 | 0.3705 | 0.4471 | | 0.5439 | 4.44 | 26400 | 0.3744 | 0.4487 | | 0.5426 | 4.51 | 26800 | 0.3716 | 0.4483 | | 0.5393 | 4.58 | 27200 | 0.3600 | 0.4356 | | 0.5408 | 4.65 | 27600 | 0.3573 | 0.4307 | | 0.5327 | 4.71 | 28000 | 0.3638 | 0.4382 | | 0.5327 | 4.78 | 28400 | 0.3587 | 0.4316 | | 0.5324 | 4.85 | 28800 | 0.3598 | 0.4290 | | 0.5378 | 4.91 | 29200 | 0.3508 | 0.4243 | | 0.5246 | 4.98 | 29600 | 0.3522 | 0.4260 | | 0.5284 | 5.05 | 30000 | 0.3520 | 0.4268 | | 0.5284 | 5.12 | 30400 | 0.3506 | 0.4224 | | 0.5154 | 5.18 | 30800 | 0.3556 | 0.4223 | | 0.5138 | 5.25 | 31200 | 0.3526 | 0.4276 | | 0.51 | 5.32 | 31600 | 0.3440 | 0.4220 | | 0.5065 | 5.39 | 32000 | 0.3367 | 0.4120 | | 0.5065 | 5.45 | 32400 | 0.3406 | 0.4136 | | 0.5087 | 5.52 | 32800 | 0.3370 | 0.4125 | | 0.503 | 5.59 | 33200 | 0.3387 | 0.4134 | | 0.5085 | 5.66 | 33600 | 0.3346 | 0.4068 | | 0.5044 | 5.72 | 34000 | 0.3325 | 0.4057 | | 0.5044 | 5.79 | 34400 | 0.3304 | 0.4026 | | 0.4879 | 5.86 | 34800 | 0.3274 | 0.4002 | | 0.4924 | 5.92 | 35200 | 0.3286 | 0.3980 | | 0.4991 | 5.99 | 35600 | 0.3231 | 0.3952 | | 0.487 | 6.06 | 36000 | 0.3324 | 0.4005 | | 0.487 | 6.13 | 36400 | 0.3264 | 0.3952 | | 0.4754 | 6.19 | 36800 | 0.3234 | 0.3905 | | 0.4683 | 6.26 | 37200 | 0.3149 | 0.3840 | | 0.4653 | 6.33 | 37600 | 0.3122 | 0.3824 | | 0.4667 | 6.4 | 38000 | 0.3151 | 0.3855 | | 0.4667 | 6.46 | 38400 | 0.3217 | 0.3859 | | 0.4628 | 6.53 | 38800 | 0.3085 | 0.3831 | | 0.4644 | 6.6 | 39200 | 0.3121 | 0.3791 | | 0.4612 | 6.67 | 39600 | 0.3093 | 0.3790 | | 0.4552 | 6.73 | 40000 | 0.3087 | 0.3749 | | 0.4552 | 6.8 | 40400 | 0.3027 | 0.3679 | | 0.4544 | 6.87 | 40800 | 0.3048 | 0.3672 | | 0.4507 | 6.93 | 41200 | 0.2963 | 0.3614 | | 0.4489 | 7.0 | 41600 | 0.3086 | 0.3718 | | 0.4367 | 7.07 | 42000 | 0.3100 | 0.3754 | | 0.4367 | 7.14 | 42400 | 0.3057 | 0.3701 | | 0.4376 | 7.2 | 42800 | 0.2930 | 0.3614 | | 0.428 | 7.27 | 43200 | 0.2907 | 0.3516 | | 0.4241 | 7.34 | 43600 | 0.2916 | 0.3590 | | 0.4312 | 7.41 | 44000 | 0.2904 | 0.3523 | | 0.4312 | 7.47 | 44400 | 0.2908 | 0.3476 | | 0.4292 | 7.54 | 44800 | 0.2858 | 0.3467 | | 0.426 | 7.61 | 45200 | 0.2864 | 0.3484 | | 0.4225 | 7.68 | 45600 | 0.2820 | 0.3441 | | 0.422 | 7.74 | 46000 | 0.2834 | 0.3441 | | 0.422 | 7.81 | 46400 | 0.2784 | 0.3420 | | 0.4158 | 7.88 | 46800 | 0.2814 | 0.3390 | | 0.4139 | 7.94 | 47200 | 0.2777 | 0.3384 | | 0.4076 | 8.01 | 47600 | 0.2741 | 0.3381 | | 0.3997 | 8.08 | 48000 | 0.2738 | 0.3320 | | 0.3997 | 8.15 | 48400 | 0.2720 | 0.3303 | | 0.4009 | 8.21 | 48800 | 0.2705 | 0.3357 | | 0.3928 | 8.28 | 49200 | 0.2708 | 0.3265 | | 0.3923 | 8.35 | 49600 | 0.2678 | 0.3283 | | 0.3897 | 8.42 | 50000 | 0.2649 | 0.3241 | | 0.3897 | 8.48 | 50400 | 0.2640 | 0.3218 | | 0.3879 | 8.55 | 50800 | 0.2616 | 0.3197 | | 0.3805 | 8.62 | 51200 | 0.2599 | 0.3170 | | 0.3874 | 8.69 | 51600 | 0.2592 | 0.3168 | | 0.3799 | 8.75 | 52000 | 0.2589 | 0.3157 | | 0.3799 | 8.82 | 52400 | 0.2566 | 0.3137 | | 0.3834 | 8.89 | 52800 | 0.2552 | 0.3141 | | 0.3811 | 8.95 | 53200 | 0.2523 | 0.3108 | | 0.3821 | 9.02 | 53600 | 0.2539 | 0.3112 | | 0.3636 | 9.09 | 54000 | 0.2529 | 0.3070 | | 0.3636 | 9.16 | 54400 | 0.2500 | 0.3078 | | 0.3706 | 9.22 | 54800 | 0.2510 | 
0.3067 | | 0.367 | 9.29 | 55200 | 0.2497 | 0.3069 | | 0.3618 | 9.36 | 55600 | 0.2493 | 0.3043 | | 0.3624 | 9.43 | 56000 | 0.2491 | 0.3040 | | 0.3624 | 9.49 | 56400 | 0.2466 | 0.3016 | | 0.3557 | 9.56 | 56800 | 0.2460 | 0.3014 | | 0.3536 | 9.63 | 57200 | 0.2470 | 0.2997 | | 0.3584 | 9.7 | 57600 | 0.2441 | 0.2989 | | 0.3563 | 9.76 | 58000 | 0.2442 | 0.2970 | | 0.3563 | 9.83 | 58400 | 0.2436 | 0.2966 | | 0.3492 | 9.9 | 58800 | 0.2431 | 0.2967 | | 0.3483 | 9.96 | 59200 | 0.2428 | 0.2960 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
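A minimal transcription sketch for this model (not part of the original card; it assumes a local 16 kHz Italian recording named `sample_it.wav` and ffmpeg available for audio decoding):

```python
# Hypothetical inference sketch using the high-level ASR pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="dbdmg/wav2vec2-xls-r-1b-italian-robust")
print(asr("sample_it.wav")["text"])  # path to a local audio file (assumption)
```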
{"language": ["it"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-1b - Italian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "it"}, "metrics": [{"type": "wer", "value": 32.74, "name": "Test WER"}, {"type": "cer", "value": 7.83, "name": "Test CER"}, {"type": "wer", "value": 19.55, "name": "Test WER (+LM)"}, {"type": "cer", "value": 5.59, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "it"}, "metrics": [{"type": "wer", "value": 43.23, "name": "Test WER"}, {"type": "cer", "value": 13.37, "name": "Test CER"}, {"type": "wer", "value": 27.51, "name": "Test WER (+LM)"}, {"type": "cer", "value": 10.69, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "it"}, "metrics": [{"type": "wer", "value": 51.12, "name": "Test WER"}]}]}]}
dbdmg/wav2vec2-xls-r-1b-italian-robust
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "it", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-italian-robust This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Italian splits of the following datasets: - Mozilla Foundation Common Voice V7 dataset - [LibriSpeech multilingual](http://www.openslr.org/94) - [TED multilingual](https://www.openslr.org/100/) - [Voxforge](http://www.voxforge.org/it/Downloads) - [M-AILABS Speech Dataset](https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/) - [EuroParl-ST](https://www.mllp.upv.es/europarl-st/) - [EMOVO](http://voice.fub.it/activities/corpora/emovo/index.html) - [MSPKA](http://www.mspkacorpus.it/) ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | No log | 0.06 | 400 | 0.7508 | 0.7354 | | 2.3127 | 0.11 | 800 | 0.5888 | 0.5882 | | 0.7256 | 0.17 | 1200 | 0.5121 | 0.5247 | | 0.6692 | 0.22 | 1600 | 0.4774 | 0.5028 | | 0.6384 | 0.28 | 2000 | 0.4832 | 0.4885 | | 0.6384 | 0.33 | 2400 | 0.4410 | 0.4581 | | 0.6199 | 0.39 | 2800 | 0.4160 | 0.4331 | | 0.5972 | 0.44 | 3200 | 0.4136 | 0.4275 | | 0.6048 | 0.5 | 3600 | 0.4362 | 0.4538 | | 0.5627 | 0.55 | 4000 | 0.4313 | 0.4469 | | 0.5627 | 0.61 | 4400 | 0.4425 | 0.4579 | | 0.5855 | 0.66 | 4800 | 0.3859 | 0.4133 | | 0.5702 | 0.72 | 5200 | 0.3974 | 0.4097 | | 0.55 | 0.77 | 5600 | 0.3931 | 0.4134 | | 0.5624 | 0.83 | 6000 | 0.3900 | 0.4126 | | 0.5624 | 0.88 | 6400 | 0.3622 | 0.3899 | | 0.5615 | 0.94 | 6800 | 0.3755 | 0.4067 | | 0.5472 | 0.99 | 7200 | 0.3980 | 0.4284 | | 0.5663 | 1.05 | 7600 | 0.3553 | 0.3782 | | 0.5189 | 1.1 | 8000 | 0.3538 | 0.3726 | | 0.5189 | 1.16 | 8400 | 0.3425 | 0.3624 | | 0.518 | 1.21 | 8800 | 0.3431 | 0.3651 | | 0.5399 | 1.27 | 9200 | 0.3442 | 0.3573 | | 0.5303 | 1.32 | 9600 | 0.3241 | 0.3404 | | 0.5043 | 1.38 | 10000 | 0.3175 | 0.3378 | | 0.5043 | 1.43 | 10400 | 0.3265 | 0.3501 | | 0.4968 | 1.49 | 10800 | 0.3539 | 0.3703 | | 0.5102 | 1.54 | 11200 | 0.3323 | 0.3506 | | 0.5008 | 1.6 | 11600 | 0.3188 | 0.3433 | | 0.4996 | 1.65 | 12000 | 0.3162 | 0.3388 | | 0.4996 | 1.71 | 12400 | 0.3353 | 0.3552 | | 0.5007 | 1.76 | 12800 | 0.3152 | 0.3317 | | 0.4956 | 1.82 | 13200 | 0.3207 | 0.3430 | | 0.5205 | 1.87 | 13600 | 0.3239 | 0.3430 | | 0.4829 | 1.93 | 14000 | 0.3134 | 0.3266 | | 0.4829 | 1.98 | 14400 | 0.3039 | 0.3291 | | 0.5251 | 2.04 | 14800 | 0.2944 | 0.3169 | | 0.4872 | 2.09 | 15200 | 0.3061 | 0.3228 | | 0.4805 | 2.15 | 15600 | 0.3034 | 0.3152 | | 0.4949 | 2.2 | 16000 | 0.2896 | 0.3066 | | 0.4949 | 2.26 | 16400 | 0.3059 | 0.3344 | | 0.468 | 2.31 | 16800 | 0.2932 | 0.3111 | | 0.4637 | 2.37 | 17200 | 0.2890 | 0.3074 | | 0.4638 | 2.42 | 17600 | 0.2893 | 0.3112 | | 0.4728 | 2.48 | 18000 | 0.2832 | 0.3013 | | 0.4728 | 2.54 | 18400 | 0.2921 | 0.3065 | | 0.456 | 2.59 | 18800 | 0.2961 | 0.3104 | | 0.4628 | 2.65 | 
19200 | 0.2886 | 0.3109 | | 0.4534 | 2.7 | 19600 | 0.2828 | 0.3020 | | 0.4578 | 2.76 | 20000 | 0.2805 | 0.3026 | | 0.4578 | 2.81 | 20400 | 0.2796 | 0.2987 | | 0.4702 | 2.87 | 20800 | 0.2748 | 0.2906 | | 0.4487 | 2.92 | 21200 | 0.2819 | 0.3008 | | 0.4411 | 2.98 | 21600 | 0.2722 | 0.2868 | | 0.4631 | 3.03 | 22000 | 0.2814 | 0.2974 | | 0.4631 | 3.09 | 22400 | 0.2762 | 0.2894 | | 0.4591 | 3.14 | 22800 | 0.2802 | 0.2980 | | 0.4349 | 3.2 | 23200 | 0.2748 | 0.2951 | | 0.4339 | 3.25 | 23600 | 0.2792 | 0.2927 | | 0.4254 | 3.31 | 24000 | 0.2712 | 0.2911 | | 0.4254 | 3.36 | 24400 | 0.2719 | 0.2892 | | 0.4317 | 3.42 | 24800 | 0.2686 | 0.2861 | | 0.4282 | 3.47 | 25200 | 0.2632 | 0.2861 | | 0.4262 | 3.53 | 25600 | 0.2633 | 0.2817 | | 0.4162 | 3.58 | 26000 | 0.2561 | 0.2765 | | 0.4162 | 3.64 | 26400 | 0.2613 | 0.2847 | | 0.414 | 3.69 | 26800 | 0.2679 | 0.2824 | | 0.4132 | 3.75 | 27200 | 0.2569 | 0.2813 | | 0.405 | 3.8 | 27600 | 0.2589 | 0.2785 | | 0.4128 | 3.86 | 28000 | 0.2611 | 0.2714 | | 0.4128 | 3.91 | 28400 | 0.2548 | 0.2731 | | 0.4174 | 3.97 | 28800 | 0.2574 | 0.2716 | | 0.421 | 4.02 | 29200 | 0.2529 | 0.2700 | | 0.4109 | 4.08 | 29600 | 0.2547 | 0.2682 | | 0.4027 | 4.13 | 30000 | 0.2578 | 0.2758 | | 0.4027 | 4.19 | 30400 | 0.2511 | 0.2715 | | 0.4075 | 4.24 | 30800 | 0.2507 | 0.2601 | | 0.3947 | 4.3 | 31200 | 0.2552 | 0.2711 | | 0.4042 | 4.35 | 31600 | 0.2530 | 0.2695 | | 0.3907 | 4.41 | 32000 | 0.2543 | 0.2738 | | 0.3907 | 4.46 | 32400 | 0.2491 | 0.2629 | | 0.3895 | 4.52 | 32800 | 0.2471 | 0.2611 | | 0.3901 | 4.57 | 33200 | 0.2404 | 0.2559 | | 0.3818 | 4.63 | 33600 | 0.2378 | 0.2583 | | 0.3831 | 4.68 | 34000 | 0.2341 | 0.2499 | | 0.3831 | 4.74 | 34400 | 0.2379 | 0.2560 | | 0.3808 | 4.79 | 34800 | 0.2418 | 0.2553 | | 0.4015 | 4.85 | 35200 | 0.2378 | 0.2565 | | 0.407 | 4.9 | 35600 | 0.2375 | 0.2535 | | 0.38 | 4.96 | 36000 | 0.2329 | 0.2451 | | 0.38 | 5.02 | 36400 | 0.2541 | 0.2737 | | 0.3753 | 5.07 | 36800 | 0.2475 | 0.2580 | | 0.3701 | 5.13 | 37200 | 0.2356 | 0.2484 | | 0.3627 | 5.18 | 37600 | 0.2422 | 0.2552 | | 0.3652 | 5.24 | 38000 | 0.2353 | 0.2518 | | 0.3652 | 5.29 | 38400 | 0.2328 | 0.2452 | | 0.3667 | 5.35 | 38800 | 0.2358 | 0.2478 | | 0.3711 | 5.4 | 39200 | 0.2340 | 0.2463 | | 0.361 | 5.46 | 39600 | 0.2375 | 0.2452 | | 0.3655 | 5.51 | 40000 | 0.2292 | 0.2387 | | 0.3655 | 5.57 | 40400 | 0.2330 | 0.2432 | | 0.3637 | 5.62 | 40800 | 0.2242 | 0.2396 | | 0.3516 | 5.68 | 41200 | 0.2284 | 0.2394 | | 0.3498 | 5.73 | 41600 | 0.2254 | 0.2343 | | 0.3626 | 5.79 | 42000 | 0.2191 | 0.2318 | | 0.3626 | 5.84 | 42400 | 0.2261 | 0.2399 | | 0.3719 | 5.9 | 42800 | 0.2261 | 0.2411 | | 0.3563 | 5.95 | 43200 | 0.2259 | 0.2416 | | 0.3574 | 6.01 | 43600 | 0.2148 | 0.2249 | | 0.3339 | 6.06 | 44000 | 0.2173 | 0.2237 | | 0.3339 | 6.12 | 44400 | 0.2133 | 0.2238 | | 0.3303 | 6.17 | 44800 | 0.2193 | 0.2297 | | 0.331 | 6.23 | 45200 | 0.2122 | 0.2205 | | 0.3372 | 6.28 | 45600 | 0.2083 | 0.2215 | | 0.3427 | 6.34 | 46000 | 0.2079 | 0.2163 | | 0.3427 | 6.39 | 46400 | 0.2072 | 0.2154 | | 0.3215 | 6.45 | 46800 | 0.2067 | 0.2170 | | 0.3246 | 6.5 | 47200 | 0.2089 | 0.2183 | | 0.3217 | 6.56 | 47600 | 0.2030 | 0.2130 | | 0.3309 | 6.61 | 48000 | 0.2020 | 0.2123 | | 0.3309 | 6.67 | 48400 | 0.2054 | 0.2133 | | 0.3343 | 6.72 | 48800 | 0.2013 | 0.2128 | | 0.3213 | 6.78 | 49200 | 0.1971 | 0.2064 | | 0.3145 | 6.83 | 49600 | 0.2029 | 0.2107 | | 0.3274 | 6.89 | 50000 | 0.2038 | 0.2136 | | 0.3274 | 6.94 | 50400 | 0.1991 | 0.2064 | | 0.3202 | 7.0 | 50800 | 0.1970 | 0.2083 | | 0.314 | 7.05 | 51200 | 0.1970 | 0.2035 | | 0.3031 | 7.11 | 51600 | 
0.1943 | 0.2053 | | 0.3004 | 7.16 | 52000 | 0.1942 | 0.1985 | | 0.3004 | 7.22 | 52400 | 0.1941 | 0.2003 | | 0.3029 | 7.27 | 52800 | 0.1936 | 0.2008 | | 0.2915 | 7.33 | 53200 | 0.1935 | 0.1995 | | 0.3005 | 7.38 | 53600 | 0.1943 | 0.2032 | | 0.2984 | 7.44 | 54000 | 0.1913 | 0.1978 | | 0.2984 | 7.5 | 54400 | 0.1907 | 0.1965 | | 0.2978 | 7.55 | 54800 | 0.1881 | 0.1958 | | 0.2944 | 7.61 | 55200 | 0.1887 | 0.1966 | | 0.3004 | 7.66 | 55600 | 0.1870 | 0.1930 | | 0.3099 | 7.72 | 56000 | 0.1906 | 0.1976 | | 0.3099 | 7.77 | 56400 | 0.1856 | 0.1939 | | 0.2917 | 7.83 | 56800 | 0.1883 | 0.1961 | | 0.2924 | 7.88 | 57200 | 0.1864 | 0.1930 | | 0.3061 | 7.94 | 57600 | 0.1831 | 0.1872 | | 0.2834 | 7.99 | 58000 | 0.1835 | 0.1896 | | 0.2834 | 8.05 | 58400 | 0.1828 | 0.1875 | | 0.2807 | 8.1 | 58800 | 0.1820 | 0.1874 | | 0.2765 | 8.16 | 59200 | 0.1807 | 0.1869 | | 0.2737 | 8.21 | 59600 | 0.1810 | 0.1848 | | 0.2722 | 8.27 | 60000 | 0.1795 | 0.1829 | | 0.2722 | 8.32 | 60400 | 0.1785 | 0.1826 | | 0.272 | 8.38 | 60800 | 0.1802 | 0.1836 | | 0.268 | 8.43 | 61200 | 0.1771 | 0.1813 | | 0.2695 | 8.49 | 61600 | 0.1773 | 0.1821 | | 0.2686 | 8.54 | 62000 | 0.1756 | 0.1814 | | 0.2686 | 8.6 | 62400 | 0.1740 | 0.1770 | | 0.2687 | 8.65 | 62800 | 0.1748 | 0.1769 | | 0.2686 | 8.71 | 63200 | 0.1734 | 0.1766 | | 0.2683 | 8.76 | 63600 | 0.1722 | 0.1759 | | 0.2686 | 8.82 | 64000 | 0.1719 | 0.1760 | | 0.2686 | 8.87 | 64400 | 0.1720 | 0.1743 | | 0.2626 | 8.93 | 64800 | 0.1696 | 0.1742 | | 0.2587 | 8.98 | 65200 | 0.1690 | 0.1718 | | 0.2554 | 9.04 | 65600 | 0.1704 | 0.1722 | | 0.2537 | 9.09 | 66000 | 0.1702 | 0.1721 | | 0.2537 | 9.15 | 66400 | 0.1696 | 0.1717 | | 0.2511 | 9.2 | 66800 | 0.1685 | 0.1701 | | 0.2473 | 9.26 | 67200 | 0.1696 | 0.1704 | | 0.2458 | 9.31 | 67600 | 0.1686 | 0.1698 | | 0.2476 | 9.37 | 68000 | 0.1675 | 0.1687 | | 0.2476 | 9.42 | 68400 | 0.1659 | 0.1673 | | 0.2463 | 9.48 | 68800 | 0.1664 | 0.1674 | | 0.2481 | 9.53 | 69200 | 0.1661 | 0.1670 | | 0.2411 | 9.59 | 69600 | 0.1658 | 0.1663 | | 0.2445 | 9.64 | 70000 | 0.1652 | 0.1660 | | 0.2445 | 9.7 | 70400 | 0.1646 | 0.1654 | | 0.2407 | 9.75 | 70800 | 0.1646 | 0.1641 | | 0.2483 | 9.81 | 71200 | 0.1641 | 0.1641 | | 0.245 | 9.86 | 71600 | 0.1635 | 0.1643 | | 0.2402 | 9.92 | 72000 | 0.1638 | 0.1634 | | 0.2402 | 9.98 | 72400 | 0.1633 | 0.1636 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
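A lower-level decoding sketch for this checkpoint (not part of the original card; it assumes torch and torchaudio are installed and that `sample_it.wav` is a local mono recording):

```python
# Hypothetical sketch: run the CTC head directly and greedy-decode the logits.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "dbdmg/wav2vec2-xls-r-300m-italian-robust"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("sample_it.wav")                   # assumed local file
speech = torchaudio.functional.resample(speech, sr, 16_000)[0]  # model expects 16 kHz audio

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```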
{"language": "it", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "XLS-R-300m - Italian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "it"}, "metrics": [{"type": "wer", "value": 17.17, "name": "Test WER"}, {"type": "cer", "value": 4.27, "name": "Test CER"}, {"type": "wer", "value": 12.07, "name": "Test WER (+LM)"}, {"type": "cer", "value": 3.52, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "it"}, "metrics": [{"type": "wer", "value": 24.29, "name": "Test WER"}, {"type": "cer", "value": 8.1, "name": "Test CER"}, {"type": "wer", "value": 17.36, "name": "Test WER (+LM)"}, {"type": "cer", "value": 7.94, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "it"}, "metrics": [{"type": "wer", "value": 33.66, "name": "Test WER"}]}]}]}
dbdmg/wav2vec2-xls-r-300m-italian-robust
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "it", "dataset:mozilla-foundation/common_voice_7_0", "base_model:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-italian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - IT dataset. It achieves the following results on the evaluation set: - Loss: inf - Wer: 0.1710 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | No log | 0.04 | 100 | inf | 1.0 | | No log | 0.09 | 200 | inf | 0.9983 | | No log | 0.13 | 300 | inf | 0.7672 | | No log | 0.18 | 400 | inf | 0.6919 | | 2.9929 | 0.22 | 500 | inf | 0.6266 | | 2.9929 | 0.26 | 600 | inf | 0.5513 | | 2.9929 | 0.31 | 700 | inf | 0.5081 | | 2.9929 | 0.35 | 800 | inf | 0.4945 | | 2.9929 | 0.39 | 900 | inf | 0.4720 | | 0.5311 | 0.44 | 1000 | inf | 0.4387 | | 0.5311 | 0.48 | 1100 | inf | 0.4411 | | 0.5311 | 0.53 | 1200 | inf | 0.4429 | | 0.5311 | 0.57 | 1300 | inf | 0.4322 | | 0.5311 | 0.61 | 1400 | inf | 0.4532 | | 0.4654 | 0.66 | 1500 | inf | 0.4492 | | 0.4654 | 0.7 | 1600 | inf | 0.3879 | | 0.4654 | 0.75 | 1700 | inf | 0.3836 | | 0.4654 | 0.79 | 1800 | inf | 0.3743 | | 0.4654 | 0.83 | 1900 | inf | 0.3687 | | 0.4254 | 0.88 | 2000 | inf | 0.3793 | | 0.4254 | 0.92 | 2100 | inf | 0.3766 | | 0.4254 | 0.97 | 2200 | inf | 0.3705 | | 0.4254 | 1.01 | 2300 | inf | 0.3272 | | 0.4254 | 1.05 | 2400 | inf | 0.3185 | | 0.3997 | 1.1 | 2500 | inf | 0.3244 | | 0.3997 | 1.14 | 2600 | inf | 0.3082 | | 0.3997 | 1.18 | 2700 | inf | 0.3040 | | 0.3997 | 1.23 | 2800 | inf | 0.3028 | | 0.3997 | 1.27 | 2900 | inf | 0.3112 | | 0.3668 | 1.32 | 3000 | inf | 0.3110 | | 0.3668 | 1.36 | 3100 | inf | 0.3067 | | 0.3668 | 1.4 | 3200 | inf | 0.2961 | | 0.3668 | 1.45 | 3300 | inf | 0.3081 | | 0.3668 | 1.49 | 3400 | inf | 0.2936 | | 0.3645 | 1.54 | 3500 | inf | 0.3037 | | 0.3645 | 1.58 | 3600 | inf | 0.2974 | | 0.3645 | 1.62 | 3700 | inf | 0.3010 | | 0.3645 | 1.67 | 3800 | inf | 0.2985 | | 0.3645 | 1.71 | 3900 | inf | 0.2976 | | 0.3624 | 1.76 | 4000 | inf | 0.2928 | | 0.3624 | 1.8 | 4100 | inf | 0.2860 | | 0.3624 | 1.84 | 4200 | inf | 0.2922 | | 0.3624 | 1.89 | 4300 | inf | 0.2866 | | 0.3624 | 1.93 | 4400 | inf | 0.2776 | | 0.3527 | 1.97 | 4500 | inf | 0.2792 | | 0.3527 | 2.02 | 4600 | inf | 0.2858 | | 0.3527 | 2.06 | 4700 | inf | 0.2767 | | 0.3527 | 2.11 | 4800 | inf | 0.2824 | | 0.3527 | 2.15 | 4900 | inf | 0.2799 | | 0.3162 | 2.19 | 5000 | inf | 0.2673 | | 0.3162 | 2.24 | 5100 | inf | 0.2962 | | 0.3162 | 2.28 | 5200 | inf | 0.2736 | | 0.3162 | 2.33 | 5300 | inf | 0.2652 | | 0.3162 | 2.37 | 5400 | inf | 0.2551 | | 0.3063 | 2.41 | 5500 | inf | 0.2680 | | 0.3063 | 2.46 | 5600 | inf | 0.2558 | | 0.3063 | 2.5 | 5700 | inf | 0.2598 | | 0.3063 | 2.54 | 5800 | inf | 0.2518 | | 0.3063 | 2.59 | 5900 | inf | 0.2541 | | 0.2913 | 2.63 | 6000 | inf | 0.2507 | | 0.2913 | 2.68 | 6100 | inf | 
0.2500 | | 0.2913 | 2.72 | 6200 | inf | 0.2435 | | 0.2913 | 2.76 | 6300 | inf | 0.2376 | | 0.2913 | 2.81 | 6400 | inf | 0.2348 | | 0.2797 | 2.85 | 6500 | inf | 0.2512 | | 0.2797 | 2.9 | 6600 | inf | 0.2382 | | 0.2797 | 2.94 | 6700 | inf | 0.2523 | | 0.2797 | 2.98 | 6800 | inf | 0.2522 | | 0.2797 | 3.03 | 6900 | inf | 0.2409 | | 0.2766 | 3.07 | 7000 | inf | 0.2453 | | 0.2766 | 3.12 | 7100 | inf | 0.2326 | | 0.2766 | 3.16 | 7200 | inf | 0.2286 | | 0.2766 | 3.2 | 7300 | inf | 0.2342 | | 0.2766 | 3.25 | 7400 | inf | 0.2305 | | 0.2468 | 3.29 | 7500 | inf | 0.2238 | | 0.2468 | 3.33 | 7600 | inf | 0.2321 | | 0.2468 | 3.38 | 7700 | inf | 0.2305 | | 0.2468 | 3.42 | 7800 | inf | 0.2174 | | 0.2468 | 3.47 | 7900 | inf | 0.2201 | | 0.2439 | 3.51 | 8000 | inf | 0.2133 | | 0.2439 | 3.55 | 8100 | inf | 0.2217 | | 0.2439 | 3.6 | 8200 | inf | 0.2189 | | 0.2439 | 3.64 | 8300 | inf | 0.2105 | | 0.2439 | 3.69 | 8400 | inf | 0.2118 | | 0.2357 | 3.73 | 8500 | inf | 0.2093 | | 0.2357 | 3.77 | 8600 | inf | 0.2103 | | 0.2357 | 3.82 | 8700 | inf | 0.2035 | | 0.2357 | 3.86 | 8800 | inf | 0.2019 | | 0.2357 | 3.91 | 8900 | inf | 0.2032 | | 0.2217 | 3.95 | 9000 | inf | 0.2056 | | 0.2217 | 3.99 | 9100 | inf | 0.2022 | | 0.2217 | 4.04 | 9200 | inf | 0.1932 | | 0.2217 | 4.08 | 9300 | inf | 0.1935 | | 0.2217 | 4.12 | 9400 | inf | 0.1906 | | 0.2025 | 4.17 | 9500 | inf | 0.1879 | | 0.2025 | 4.21 | 9600 | inf | 0.1882 | | 0.2025 | 4.26 | 9700 | inf | 0.1854 | | 0.2025 | 4.3 | 9800 | inf | 0.1865 | | 0.2025 | 4.34 | 9900 | inf | 0.1844 | | 0.1869 | 4.39 | 10000 | inf | 0.1822 | | 0.1869 | 4.43 | 10100 | inf | 0.1815 | | 0.1869 | 4.48 | 10200 | inf | 0.1812 | | 0.1869 | 4.52 | 10300 | inf | 0.1792 | | 0.1869 | 4.56 | 10400 | inf | 0.1797 | | 0.1863 | 4.61 | 10500 | inf | 0.1774 | | 0.1863 | 4.65 | 10600 | inf | 0.1767 | | 0.1863 | 4.7 | 10700 | inf | 0.1765 | | 0.1863 | 4.74 | 10800 | inf | 0.1753 | | 0.1863 | 4.78 | 10900 | inf | 0.1731 | | 0.178 | 4.83 | 11000 | inf | 0.1727 | | 0.178 | 4.87 | 11100 | inf | 0.1724 | | 0.178 | 4.91 | 11200 | inf | 0.1722 | | 0.178 | 4.96 | 11300 | inf | 0.1712 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["it"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300m - Italian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "it"}, "metrics": [{"type": "wer", "value": 19.44, "name": "Test WER"}, {"type": "cer", "value": 4.47, "name": "Test CER"}, {"type": "wer", "value": 14.08, "name": "Test WER (+LM)"}, {"type": "cer", "value": 3.67, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "it"}, "metrics": [{"type": "wer", "value": 31.01, "name": "Test WER"}, {"type": "cer", "value": 9.27, "name": "Test CER"}, {"type": "wer", "value": 22.09, "name": "Test WER (+LM)"}, {"type": "cer", "value": 7.9, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "it"}, "metrics": [{"type": "wer", "value": 38.07, "name": "Test WER"}]}]}]}
dbdmg/wav2vec2-xls-r-300m-italian
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "it", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# algebra_linear_1d --- language: en datasets: - algebra_linear_1d --- This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on [math_dataset/algebra_linear_1d](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetalgebra_linear_1d_default_config) for the task of solving **1D linear algebra equations**. To load the model (necessary packages: !pip install transformers sentencepiece): ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("dbernsohn/algebra_linear_1d") model = AutoModelWithLMHead.from_pretrained("dbernsohn/algebra_linear_1d") ``` You can then use this model to solve 1D linear algebra equations and return the numeric answer. ```python query = "Solve 0 = 1026*x - 2474 + 46592 for x" input_text = f"{query} </s>" features = tokenizer([input_text], return_tensors='pt') model.to('cuda') output = model.generate(input_ids=features['input_ids'].cuda(), attention_mask=features['attention_mask'].cuda()) tokenizer.decode(output[0]) # <pad> -41</s> ``` More examples: + Solve 1112*r + 1418*r - 5220 = 587*r - 28536 for r. + Answer: -12 Pred: -12 ---- + Solve -119*k + 6*k - 117 - 352 = 322 for k. + Answer: -7 Pred: -7 ---- + Solve -547 = -62*t + 437 - 798 for t. + Answer: 3 Pred: 3 ---- + Solve 3*j - 3*j + 0*j - 4802 = 98*j for j. + Answer: -49 Pred: -49 ---- + Solve 3047*n - 6130*n - 1700 = -3049*n for n. + Answer: -50 Pred: -50 ---- + Solve 121*i + 1690 = 76*i - 128*i + 133 for i. + Answer: -9 Pred: -9 The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
{}
dbernsohn/algebra_linear_1d
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# algebra_linear_1d_composed --- language: en datasets: - algebra_linear_1d_composed --- This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on [math_dataset/algebra_linear_1d_composed](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetalgebra_linear_1d_composed) for the task of solving **composed 1D linear algebra equations**. To load the model (necessary packages: !pip install transformers sentencepiece): ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("dbernsohn/algebra_linear_1d_composed") model = AutoModelWithLMHead.from_pretrained("dbernsohn/algebra_linear_1d_composed") ``` You can then use this model to solve composed 1D linear algebra equations and return the numeric answer. ```python query = "Suppose -d = 5 - 16. Let b = -579 + 584. Solve -b*c + 36 = d for c." input_text = f"{query} </s>" features = tokenizer([input_text], return_tensors='pt') model.to('cuda') output = model.generate(input_ids=features['input_ids'].cuda(), attention_mask=features['attention_mask'].cuda()) tokenizer.decode(output[0]) # <pad> 5</s> ``` More examples: + Suppose -d = 5 - 16. Let b = -579 + 584. Solve -b*c + 36 = d for c. + Answer: 5 Pred: 5 ---- + Suppose 3*v - l + 9 = 4*v, 0 = -5*v + 5*l - 5. Let f(s) = 3*s**2 + 1. Let g be f(-1). Suppose 63 = g*x - x. Solve -5*i + v + x = 0 for i. + Answer: 5 Pred: 5 ---- + Let w be 2 - (0 - 0)/(-2). Let f = -110 - -110. Suppose f*m - 4*m + 3*m = 0. Solve m*v = -w*v for v. + Answer: 0 Pred: 0 ---- + Let a(h) = -34*h**3 - 15 + 3*h + 36*h**3 + 8*h**2 + 5*h**2. Let r be a(-6). Solve 2*z = r*z for z. + Answer: 0 Pred: 0 ---- + Suppose -3*p + 24 = -3*c, 0*c + 6 = -2*c. Suppose -67 = 4*i + 289. Let t = i + 94. Solve t = 2*y - p for y. + Answer: 5 Pred: 5 ---- + Let b = -36 + 53. Suppose -7*u - b = -73. Solve j + 3*j = -u for j. + Answer: -2 Pred: -2 ---- + Let h be 8*((-2)/2 + 14)*1. Let y = -101 + h. Solve y*p = -p for p. + Answer: 0 Pred: 0 ---- + Let b = 178 - 79. Let s be 9/(-1 - 2 - b/(-22)). Solve s = -k - k for k. + Answer: -3 Pred: -3 ---- + Suppose 31 = -4*z + 11, -3*k - 5*z - 22 = 0. Solve 23 = -11*p + k for p. + Answer: -2 Pred: -2 The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
{}
dbernsohn/algebra_linear_1d_composed
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# roberta-go --- language: Go datasets: - code_search_net --- This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **Golang** masked language modeling task. To load the model (necessary packages: !pip install transformers sentencepiece): ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-go") model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-go") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) ``` You can then use this model to fill masked tokens in Go code. ```python code = """ package main import ( "fmt" "runtime" ) func main() { fmt.Print("Go runs on ") switch os := runtime.<mask>; os { case "darwin": fmt.Println("OS X.") case "linux": fmt.Println("Linux.") default: // freebsd, openbsd, // plan9, windows... fmt.Printf("%s.\n", os) } } """.lstrip() pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)} sorted(pred.items(), key=lambda kv: kv[1], reverse=True) [('GOOS', 0.11810332536697388), ('FileInfo', 0.04276798665523529), ('Stdout', 0.03572738170623779), ('Getenv', 0.025064032524824142), ('FileMode', 0.01462600938975811)] ``` The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
{}
dbernsohn/roberta-go
null
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# roberta-java --- language: Java datasets: - code_search_net --- This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **Java** masked language modeling task. To load the model (necessary packages: !pip install transformers sentencepiece): ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-java") model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-java") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) ``` You can then use this model to fill masked tokens in Java code. ```python code = """ String[] cars = {"Volvo", "BMW", "Ford", "Mazda"}; for (String i : cars) { System.out.<mask>(i); } """.lstrip() pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)} sorted(pred.items(), key=lambda kv: kv[1], reverse=True) # [('println', 0.32571351528167725), # ('get', 0.2897663116455078), # ('remove', 0.0637081190943718), # ('exit', 0.058875661343336105), # ('print', 0.034190207719802856)] ``` The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
{}
dbernsohn/roberta-java
null
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# roberta-javascript --- language: javascript datasets: - code_search_net --- This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **JavaScript** masked language modeling task. To load the model (necessary packages: !pip install transformers sentencepiece): ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-javascript") model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-javascript") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) ``` You can then use this model to fill masked tokens in JavaScript code. ```python code = """ var i; for (i = 0; i < cars.<mask>; i++) { text += cars[i] + "<br>"; } """.lstrip() pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)} sorted(pred.items(), key=lambda kv: kv[1], reverse=True) # [('length', 0.9959614872932434), # ('i', 0.00027875584783032537), # ('len', 0.0002283261710545048), # ('nodeType', 0.00013731322542298585), # ('index', 7.5289819505997e-05)] ``` The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
{}
dbernsohn/roberta-javascript
null
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# roberta-php --- language: php datasets: - code_search_net --- This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **PHP** masked language modeling task. To load the model (necessary packages: !pip install transformers sentencepiece): ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-php") model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-php") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) ``` You can then use this model to fill masked tokens in PHP code. ```python code = """ $people = array( array('name' => 'Kalle', 'salt' => 856412), array('name' => 'Pierre', 'salt' => 215863) ); for($i = 0; $i < count($<mask>); ++$i) { $people[$i]['salt'] = mt_rand(000000, 999999); } """.lstrip() pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)} sorted(pred.items(), key=lambda kv: kv[1], reverse=True) # [('people', 0.785636842250824), # ('parts', 0.006270722020417452), # ('id', 0.0035842324141412973), # ('data', 0.0025512021966278553), # ('config', 0.002258970635011792)] ``` The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
{}
dbernsohn/roberta-php
null
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# roberta-python --- language: python datasets: - code_search_net --- This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **Python** masked language modeling task. To load the model (necessary packages: !pip install transformers sentencepiece): ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-python") model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-python") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) ``` You can then use this model to fill masked tokens in Python code. ```python code = """ new_dict = {} for k, v in my_dict.<mask>(): new_dict[k] = v**2 """.lstrip() pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)} sorted(pred.items(), key=lambda kv: kv[1], reverse=True) # [('items', 0.7376779913902283), # ('keys', 0.16238391399383545), # ('values', 0.03965481370687485), # ('iteritems', 0.03346433863043785), # ('splitlines', 0.0032723243348300457)] ``` The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
{}
dbernsohn/roberta-python
null
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# measurement_time --- language: en datasets: - measurement_time --- This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on [math_dataset/measurement_time](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetmeasurement_time) for the task of solving **time measurement problems**. To load the model (necessary packages: !pip install transformers sentencepiece): ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("dbernsohn/t5_measurement_time") model = AutoModelWithLMHead.from_pretrained("dbernsohn/t5_measurement_time") ``` You can then use this model to answer time measurement questions. ```python query = "How many minutes are there between 2:09 PM and 2:27 PM?" input_text = f"{query} </s>" features = tokenizer([input_text], return_tensors='pt') model.to('cuda') output = model.generate(input_ids=features['input_ids'].cuda(), attention_mask=features['attention_mask'].cuda()) tokenizer.decode(output[0]) # <pad> 18</s> ``` More examples: + How many minutes are there between 2:09 PM and 2:27 PM? + Answer: 18 Pred: 18 ---- + What is 116 minutes after 10:06 AM? + Answer: 12:02 PM Pred: 12:02 PM ---- + What is 608 minutes after 3:14 PM? + Answer: 1:22 AM Pred: 1:22 AM ---- + What is 64 minutes before 9:16 AM? + Answer: 8:12 AM Pred: 8:12 AM ---- + What is 427 minutes before 4:27 AM? + Answer: 9:20 PM Pred: 9:20 PM ---- + How many minutes are there between 6:36 PM and 12:15 AM? + Answer: 339 Pred: 339 ---- + What is 554 minutes before 5:24 PM? + Answer: 8:10 AM Pred: 8:10 AM ---- + What is 307 minutes after 5:15 AM? + Answer: 10:22 AM Pred: 10:22 AM The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
{}
dbernsohn/t5_measurement_time
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# numbers_gcd --- language: en datasets: - numbers_gcd --- This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on [math_dataset/numbers_gcd](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetnumbers_gcd) for the task of computing the **greatest common divisor**. To load the model (necessary packages: !pip install transformers sentencepiece): ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("dbernsohn/t5_numbers_gcd") model = AutoModelWithLMHead.from_pretrained("dbernsohn/t5_numbers_gcd") ``` You can then use this model to compute the greatest common divisor of two numbers. ```python query = "What is the highest common factor of 4210884 and 72?" input_text = f"{query} </s>" features = tokenizer([input_text], return_tensors='pt') model.to('cuda') output = model.generate(input_ids=features['input_ids'].cuda(), attention_mask=features['attention_mask'].cuda()) tokenizer.decode(output[0]) # <pad> 36</s> ``` More examples: + Calculate the greatest common factor of 3470 and 97090. + Answer: 10 Pred: 10 ---- + Calculate the highest common factor of 3480 and 775431. + Answer: 87 Pred: 87 ---- + What is the highest common divisor of 26 and 88049? + Answer: 13 Pred: 13 ---- + Calculate the highest common factor of 1416 and 24203688. + Answer: 1416 Pred: 1416 ---- + Calculate the highest common divisor of 124 and 69445828. + Answer: 124 Pred: 124 ---- + What is the greatest common factor of 657906 and 470? + Answer: 94 Pred: 94 ---- + What is the highest common factor of 4210884 and 72? + Answer: 36 Pred: 36 The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
{}
dbernsohn/t5_numbers_gcd
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# t5_wikisql_SQL2en --- language: en datasets: - wikisql --- This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on the [wikisql dataset](https://huggingface.co/datasets/wikisql) for the **SQL**-to-**English** translation text2text task. To load the model (necessary packages: !pip install transformers sentencepiece): ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("dbernsohn/t5_wikisql_SQL2en") model = AutoModelWithLMHead.from_pretrained("dbernsohn/t5_wikisql_SQL2en") ``` You can then use this model to translate SQL queries into plain English. ```python query = "SELECT people FROM peoples where age > 10" input_text = f"translate SQL to English: {query} </s>" features = tokenizer([input_text], return_tensors='pt') model.to('cuda') output = model.generate(input_ids=features['input_ids'].cuda(), attention_mask=features['attention_mask'].cuda()) tokenizer.decode(output[0]) # Output: "What people are older than 10?" ``` The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/SQLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
{}
dbernsohn/t5_wikisql_SQL2en
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# t5_wikisql_en2SQL --- language: en datasets: - wikisql --- This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on the [wikisql dataset](https://huggingface.co/datasets/wikisql) for the **English**-to-**SQL** translation text2text task. To load the model (necessary packages: !pip install transformers sentencepiece): ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("dbernsohn/t5_wikisql_en2SQL") model = AutoModelWithLMHead.from_pretrained("dbernsohn/t5_wikisql_en2SQL") ``` You can then use this model to translate plain English questions into SQL queries. ```python query = "what are the names of all the people in the USA?" input_text = f"translate English to Sql: {query} </s>" features = tokenizer([input_text], return_tensors='pt') model.to('cuda') output = model.generate(input_ids=features['input_ids'].cuda(), attention_mask=features['attention_mask'].cuda()) tokenizer.decode(output[0]) # Output: "SELECT Name FROM table WHERE Country = USA" ``` The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/SQLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
{}
dbernsohn/t5_wikisql_en2SQL
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
generic
# Feature Extraction repository template This is a template repository for feature extraction, to support generic inference with the Hugging Face Hub generic Inference API. There are two required steps: 1. Specify the requirements by defining a `requirements.txt` file. 2. Implement the `pipeline.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload all the elements needed for inference (model, processors, tokenizers, etc.); it is called only once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work (see the sketch below). Example repos: * https://huggingface.co/osanseviero/fasttext_english ## How to start First create a repo in https://hf.co/new. Then clone this template and push it to your repo. ``` git clone https://huggingface.co/templates/feature-extraction cd feature-extraction git remote set-url origin https://huggingface.co/$YOUR_USER/$YOUR_REPO_NAME git push --force ```
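A minimal `pipeline.py` sketch (not part of the original template; the sentence-transformers dependency, the class name and the exact input/output contract are assumptions to be checked against the template's own `pipeline.py` and specification):

```python
# Hypothetical pipeline.py sketch for a generic feature-extraction repo.
# Assumption: sentence-transformers is listed in requirements.txt and the
# Inference API passes a single string and expects a flat list of floats back.
from typing import List

from sentence_transformers import SentenceTransformer


class PreTrainedPipeline:
    def __init__(self, path: str = ""):
        # Called once: load the model and everything needed for inference.
        self.model = SentenceTransformer(path or "sentence-transformers/all-MiniLM-L6-v2")

    def __call__(self, inputs: str) -> List[float]:
        # Called per request: return one embedding vector as a plain Python list.
        return self.model.encode(inputs).tolist()
```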
{"library_name": "generic", "tags": ["feature-extraction"]}
dbguilherme/teste
null
[ "generic", "feature-extraction", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dbguilherme/teste01
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dbiosca/wav2vec2-large-xlsr-53-demo-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
dbmdz/bert-base-cased-finetuned-conll03-english
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# Historic Language Models (HLMs) ## Languages Our Historic Language Models Zoo contains support for the following languages - incl. their training data source: | Language | Training data | Size | -------- | ------------- | ---- | German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered) | French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered) | English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered) | Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB | Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB ## Models At the moment, the following models are available on the model hub: | Model identifier | Model Hub link | --------------------------------------------- | -------------------------------------------------------------------------- | `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) | `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased) | `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased) | `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased) # Corpora Stats ## German Europeana Corpus We provide some statistics using different thresholds of ocr confidences, in order to shrink down the corpus size and use less-noisier data: | OCR confidence | Size | -------------- | ---- | **0.60** | 28GB | 0.65 | 18GB | 0.70 | 13GB For the final corpus we use a OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution: ![German Europeana Corpus Stats](stats/figures/german_europeana_corpus_stats.png) ## French Europeana Corpus Like German, we use different ocr confidence thresholds: | OCR confidence | Size | -------------- | ---- | 0.60 | 31GB | 0.65 | 27GB | **0.70** | 27GB | 0.75 | 23GB | 0.80 | 11GB For the final corpus we use a OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution: ![French Europeana Corpus Stats](stats/figures/french_europeana_corpus_stats.png) ## British Library Corpus Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering: | Years | Size | ----------------- | ---- | ALL | 24GB | >= 1800 && < 1900 | 24GB We use the year filtered variant. The following plot shows a tokens per year distribution: ![British Library Corpus Stats](stats/figures/bl_corpus_stats.png) ## Finnish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.2GB The following plot shows a tokens per year distribution: ![Finnish Europeana Corpus Stats](stats/figures/finnish_europeana_corpus_stats.png) ## Swedish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.1GB The following plot shows a tokens per year distribution: ![Swedish Europeana Corpus Stats](stats/figures/swedish_europeana_corpus_stats.png) ## All Corpora The following plot shows a tokens per year distribution of the complete training corpus: ![All Corpora Stats](stats/figures/all_corpus_stats.png) # Multilingual Vocab generation For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB. 
The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs: | Language | Size | -------- | ---- | German | 10GB | French | 10GB | English | 10GB | Finnish | 9.5GB | Swedish | 9.7GB We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora: | Language | NER corpora | -------- | ------------------ | German | CLEF-HIPE, NewsEye | French | CLEF-HIPE, NewsEye | English | CLEF-HIPE | Finnish | NewsEye | Swedish | NewsEye Breakdown of subword fertility rate and unknown portion per language for the 32k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.43 | 0.0004 | French | 1.25 | 0.0001 | English | 1.25 | 0.0 | Finnish | 1.69 | 0.0007 | Swedish | 1.43 | 0.0 Breakdown of subword fertility rate and unknown portion per language for the 64k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.31 | 0.0004 | French | 1.16 | 0.0001 | English | 1.17 | 0.0 | Finnish | 1.54 | 0.0007 | Swedish | 1.32 | 0.0 # Final pretraining corpora We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here: | Language | Size | -------- | ---- | German | 28GB | French | 27GB | English | 24GB | Finnish | 27GB | Swedish | 27GB Total size is 130GB. # Pretraining ## Multilingual model We train a multilingual BERT model using the 32k vocab with the official BERT implementation on a v3-32 TPU using the following parameters: ```bash python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \ --output_dir gs://histolectra/bert-base-historic-multilingual-cased \ --bert_config_file ./config.json \ --max_seq_length=512 \ --max_predictions_per_seq=75 \ --do_train=True \ --train_batch_size=128 \ --num_train_steps=3000000 \ --learning_rate=1e-4 \ --save_checkpoints_steps=100000 \ --keep_checkpoint_max=20 \ --use_tpu=True \ --tpu_name=electra-2 \ --num_tpu_cores=32 ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_historic-multilingual.png) ## English model The English BERT model - with texts from British Library corpus - was trained with the Hugging Face JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-historic-english-cased/ \ --tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \ --train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \ --validation_file /mnt/datasets/bl-corpus/english_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 10 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_historic_english.png) ## Finnish model The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face JAX/FLAX implementation for 40 epochs (approx. 
1M steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \ --tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \ --train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \ --validation_file /mnt/datasets/hlms/finnish_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 40 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_finnish_europeana.png) ## Swedish model The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \ --tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \ --train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \ --validation_file /mnt/datasets/hlms/swedish_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 40 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_swedish_europeana.png) # Acknowledgments Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
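# Usage

A minimal usage sketch for the Finnish Europeana model (assuming a recent Transformers release; the identifier is the one listed in the model table above, and the example sentence is the widget example from this model card):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_name = "dbmdz/bert-base-finnish-europeana-cased"

# Load the tokenizer and the masked-LM head from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Predict the masked token in a noisy, OCR'd historic Finnish sentence.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("Täkäläinen sanomalehdistö [MASK] erit - täin"))
```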
{"language": "finnish", "license": "mit", "widget": [{"text": "T\u00e4k\u00e4l\u00e4inen sanomalehdist\u00f6 [MASK] erit - t\u00e4in"}]}
dbmdz/bert-base-finnish-europeana-cased
null
[ "transformers", "pytorch", "jax", "tensorboard", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🤗 + 📚 dbmdz BERT model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources French Europeana BERT models 🎉 # French Europeana BERT We extracted all French texts using the `language` metadata attribute from the Europeana corpus. The resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens. Based on the metadata information, texts from the 18th - 20th century are mainly included in the training corpus. Detailed information about the data and pretraining steps can be found in [this repository](https://github.com/stefan-it/europeana-bert). ## Model weights BERT model weights for PyTorch and TensorFlow are available. * French Europeana BERT: `dbmdz/bert-base-french-europeana-cased` - [model hub page](https://huggingface.co/dbmdz/bert-base-french-europeana-cased/tree/main) ## Results For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert). ## Usage With Transformers >= 2.3 our French Europeana BERT model can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-french-europeana-cased") model = AutoModel.from_pretrained("dbmdz/bert-base-french-europeana-cased") ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT model just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download our model from their S3 storage 🤗
{"language": "fr", "license": "mit", "tags": ["historic french"]}
dbmdz/bert-base-french-europeana-cased
null
[ "transformers", "pytorch", "tf", "jax", "bert", "historic french", "fr", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# 🤗 + 📚 dbmdz German BERT models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources further German BERT models 🎉 # German BERT ## Stats In addition to the recently released [German BERT](https://deepset.ai/german-bert) model by [deepset](https://deepset.ai/) we provide another German-language model. The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus, Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with a size of 16GB and 2,350,234,427 tokens. For sentence splitting, we use [spacy](https://spacy.io/). Our preprocessing steps (sentence piece model for vocab generation) follow those used for training [SciBERT](https://github.com/allenai/scibert). The models were trained with an initial sequence length of 512 subwords; training was performed for 1.5M steps. This release includes both cased and uncased models. ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | -------------------------------- | --------------------------------------------------------------------------------------------------------------- | `bert-base-german-dbmdz-cased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt) | `bert-base-german-dbmdz-uncased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt) ## Usage With Transformers >= 2.3 our German BERT models can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased") model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased") ``` ## Results For results on downstream tasks like NER or PoS tagging, please refer to [this repository](https://github.com/stefan-it/fine-tuned-berts-seq). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "de", "license": "mit"}
dbmdz/bert-base-german-cased
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🤗 + 📚 dbmdz BERT models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources German Europeana BERT models 🎉 # German Europeana BERT We use the open source [Europeana newspapers](http://www.europeana-newspapers.eu/) that were provided by *The European Library*. The final training corpus has a size of 51GB and consists of 8,035,986,369 tokens. Detailed information about the data and pretraining steps can be found in [this repository](https://github.com/stefan-it/europeana-bert). ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | ------------------------------------------ | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-german-europeana-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-cased/vocab.txt) ## Results For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert). ## Usage With Transformers >= 2.3 our German Europeana BERT models can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-europeana-cased") model = AutoModel.from_pretrained("dbmdz/bert-base-german-europeana-cased") ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "de", "license": "mit", "tags": ["historic german"]}
dbmdz/bert-base-german-europeana-cased
null
[ "transformers", "pytorch", "tf", "jax", "bert", "historic german", "de", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🤗 + 📚 dbmdz BERT models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources German Europeana BERT models 🎉 # German Europeana BERT We use the open source [Europeana newspapers](http://www.europeana-newspapers.eu/) that were provided by *The European Library*. The final training corpus has a size of 51GB and consists of 8,035,986,369 tokens. Detailed information about the data and pretraining steps can be found in [this repository](https://github.com/stefan-it/europeana-bert). ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | ------------------------------------------ | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-german-europeana-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-uncased/vocab.txt) ## Results For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert). ## Usage With Transformers >= 2.3 our German Europeana BERT models can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-europeana-uncased") model = AutoModel.from_pretrained("dbmdz/bert-base-german-europeana-uncased") ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "de", "license": "mit", "tags": ["historic german"]}
dbmdz/bert-base-german-europeana-uncased
null
[ "transformers", "pytorch", "tf", "jax", "bert", "historic german", "de", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# 🤗 + 📚 dbmdz German BERT models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources further German BERT models 🎉 # German BERT ## Stats In addition to the recently released [German BERT](https://deepset.ai/german-bert) model by [deepset](https://deepset.ai/) we provide another German-language model. The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus, Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with a size of 16GB and 2,350,234,427 tokens. For sentence splitting, we use [spacy](https://spacy.io/). Our preprocessing steps (sentence piece model for vocab generation) follow those used for training [SciBERT](https://github.com/allenai/scibert). The models were trained with an initial sequence length of 512 subwords; training was performed for 1.5M steps. This release includes both cased and uncased models. ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | -------------------------------- | --------------------------------------------------------------------------------------------------------------- | `bert-base-german-dbmdz-cased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt) | `bert-base-german-dbmdz-uncased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt) ## Usage With Transformers >= 2.3 our German BERT models can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased") model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased") ``` ## Results For results on downstream tasks like NER or PoS tagging, please refer to [this repository](https://github.com/stefan-it/fine-tuned-berts-seq). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "de", "license": "mit"}
dbmdz/bert-base-german-uncased
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# Language Model for Historic Dutch In this repository we open source a language model for Historic Dutch, trained on the [Delpher Corpus](https://www.delpher.nl/over-delpher/delpher-open-krantenarchief/download-teksten-kranten-1618-1879\), that include digitized texts from Dutch newspapers, ranging from 1618 to 1879. # Changelog * 13.12.2021: Initial version of this repository. # Model Zoo The following models for Historic Dutch are available on the Hugging Face Model Hub: | Model identifier | Model Hub link | -------------------------------------- | ------------------------------------------------------------------- | `dbmdz/bert-base-historic-dutch-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-dutch-cased) # Stats The download urls for all archives can be found [here](delpher-corpus.urls). We then used the awesome `alto-tools` from [this](https://github.com/cneud/alto-tools) repository to extract plain text. The following table shows the size overview per year range: | Period | Extracted plain text size | --------- | -------------------------: | 1618-1699 | 170MB | 1700-1709 | 103MB | 1710-1719 | 65MB | 1720-1729 | 137MB | 1730-1739 | 144MB | 1740-1749 | 188MB | 1750-1759 | 171MB | 1760-1769 | 235MB | 1770-1779 | 271MB | 1780-1789 | 414MB | 1790-1799 | 614MB | 1800-1809 | 734MB | 1810-1819 | 807MB | 1820-1829 | 987MB | 1830-1839 | 1.7GB | 1840-1849 | 2.2GB | 1850-1854 | 1.3GB | 1855-1859 | 1.7GB | 1860-1864 | 2.0GB | 1865-1869 | 2.3GB | 1870-1874 | 1.9GB | 1875-1876 | 867MB | 1877-1879 | 1.9GB The total training corpus consists of 427,181,269 sentences and 3,509,581,683 tokens (counted via `wc`), resulting in a total corpus size of 21GB. The following figure shows an overview of the number of chars per year distribution: ![Delpher Corpus Stats](figures/delpher_corpus_stats.png) # Language Model Pretraining We use the official [BERT](https://github.com/google-research/bert) implementation using the following command to train the model: ```bash python3 run_pretraining.py --input_file gs://delpher-bert/tfrecords/*.tfrecord \ --output_dir gs://delpher-bert/bert-base-historic-dutch-cased \ --bert_config_file ./config.json \ --max_seq_length=512 \ --max_predictions_per_seq=75 \ --do_train=True \ --train_batch_size=128 \ --num_train_steps=3000000 \ --learning_rate=1e-4 \ --save_checkpoints_steps=100000 \ --keep_checkpoint_max=20 \ --use_tpu=True \ --tpu_name=electra-2 \ --num_tpu_cores=32 ``` We train the model for 3M steps using a total batch size of 128 on a v3-32 TPU. The pretraining loss curve can be seen in the next figure: ![Delpher Pretraining Loss Curve](figures/training_loss.png) # Evaluation We evaluate our model on the preprocessed Europeana NER dataset for Dutch, that was presented in the ["Data Centric Domain Adaptation for Historical Text with OCR Errors"](https://github.com/stefan-it/historic-domain-adaptation-icdar) paper. The data is available in their repository. We perform a hyper-parameter search for: * Batch sizes: `[4, 8]` * Learning rates: `[3e-5, 5e-5]` * Number of epochs: `[5, 10]` and report averaged F1-Score over 5 runs with different seeds. We also include [hmBERT](https://github.com/stefan-it/clef-hipe/blob/main/hlms.md) as baseline model. Results: | Model | F1-Score (Dev / Test) | ------------------- | --------------------- | hmBERT | (82.73) / 81.34 | Maerz et al. (2021) | - / 84.2 | Ours | (89.73) / 87.45 # License All models are licensed under [MIT](LICENSE). 
# Acknowledgments Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️ We thank [Clemens Neudecker](https://github.com/cneud) for maintaining the amazing [ALTO tools](https://github.com/cneud/alto-tools) that were used for parsing the Delpher Corpus XML files. Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
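# Usage

A minimal usage sketch (assuming a recent Transformers release; the identifier is the one listed in the Model Zoo table above, and the example input is the widget sentence from this model card):

```python
from transformers import pipeline

# Masked-LM pipeline for the Historic Dutch model.
fill_mask = pipeline(
    "fill-mask",
    model="dbmdz/bert-base-historic-dutch-cased",
)

# OCR'd historic Dutch input with one masked token.
print(fill_mask("de [MASK] vau Financien, in hec vorige jaar, da inkomswi"))
```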
{"language": "dutch", "license": "mit", "widget": [{"text": "de [MASK] vau Financien, in hec vorige jaar, da inkomswi"}]}
dbmdz/bert-base-historic-dutch-cased
null
[ "transformers", "pytorch", "tf", "tensorboard", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
🚨 Notice: After re-checking this model, it turns out that it does not work very well: MLM predictions very often return the `[UNK]` token. We will update this model soon. For now, please use [`bigscience-historical-texts/bert-base-blbooks-cased`](https://huggingface.co/bigscience-historical-texts/bert-base-blbooks-cased) instead, as it was pretrained on the same corpus.
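If a masked-LM for historic English is needed right away, a minimal sketch for loading the recommended replacement checkpoint (assuming it is available on the Hub under the identifier above; the example sentence is the widget example from this card) looks like this:

```python
from transformers import pipeline

# Recommended replacement checkpoint, pretrained on the same corpus.
fill_mask = pipeline(
    "fill-mask",
    model="bigscience-historical-texts/bert-base-blbooks-cased",
)

print(fill_mask("and I cannot conceive the reafon why [MASK] hath"))
```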
{"language": "en", "license": "mit", "widget": [{"text": "and I cannot conceive the reafon why [MASK] hath"}]}
dbmdz/bert-base-historic-english-cased
null
[ "transformers", "pytorch", "jax", "tensorboard", "safetensors", "bert", "fill-mask", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# hmBERT: Historical Multilingual Language Models for Named Entity Recognition More information about our hmBERT model can be found in our new paper: ["hmBERT: Historical Multilingual Language Models for Named Entity Recognition"](https://arxiv.org/abs/2205.15575). ## Languages Our Historic Language Models Zoo contains support for the following languages - incl. their training data source: | Language | Training data | Size | -------- | ------------- | ---- | German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered) | French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered) | English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered) | Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB | Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB ## Smaller Models We have also released smaller models for the multilingual model: | Model identifier | Model Hub link | ----------------------------------------------- | --------------------------------------------------------------------------- | `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased) | `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased) | `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased) | `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) # Corpora Stats ## German Europeana Corpus We provide some statistics using different thresholds of ocr confidences, in order to shrink down the corpus size and use less-noisier data: | OCR confidence | Size | -------------- | ---- | **0.60** | 28GB | 0.65 | 18GB | 0.70 | 13GB For the final corpus we use a OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution: ![German Europeana Corpus Stats](stats/figures/german_europeana_corpus_stats.png) ## French Europeana Corpus Like German, we use different ocr confidence thresholds: | OCR confidence | Size | -------------- | ---- | 0.60 | 31GB | 0.65 | 27GB | **0.70** | 27GB | 0.75 | 23GB | 0.80 | 11GB For the final corpus we use a OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution: ![French Europeana Corpus Stats](stats/figures/french_europeana_corpus_stats.png) ## British Library Corpus Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering: | Years | Size | ----------------- | ---- | ALL | 24GB | >= 1800 && < 1900 | 24GB We use the year filtered variant. 
The following plot shows a tokens per year distribution: ![British Library Corpus Stats](stats/figures/bl_corpus_stats.png) ## Finnish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.2GB The following plot shows a tokens per year distribution: ![Finnish Europeana Corpus Stats](stats/figures/finnish_europeana_corpus_stats.png) ## Swedish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.1GB The following plot shows a tokens per year distribution: ![Swedish Europeana Corpus Stats](stats/figures/swedish_europeana_corpus_stats.png) ## All Corpora The following plot shows a tokens per year distribution of the complete training corpus: ![All Corpora Stats](stats/figures/all_corpus_stats.png) # Multilingual Vocab generation For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB. The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs: | Language | Size | -------- | ---- | German | 10GB | French | 10GB | English | 10GB | Finnish | 9.5GB | Swedish | 9.7GB We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora: | Language | NER corpora | -------- | ------------------ | German | CLEF-HIPE, NewsEye | French | CLEF-HIPE, NewsEye | English | CLEF-HIPE | Finnish | NewsEye | Swedish | NewsEye Breakdown of subword fertility rate and unknown portion per language for the 32k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.43 | 0.0004 | French | 1.25 | 0.0001 | English | 1.25 | 0.0 | Finnish | 1.69 | 0.0007 | Swedish | 1.43 | 0.0 Breakdown of subword fertility rate and unknown portion per language for the 64k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.31 | 0.0004 | French | 1.16 | 0.0001 | English | 1.17 | 0.0 | Finnish | 1.54 | 0.0007 | Swedish | 1.32 | 0.0 # Final pretraining corpora We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here: | Language | Size | -------- | ---- | German | 28GB | French | 27GB | English | 24GB | Finnish | 27GB | Swedish | 27GB Total size is 130GB. # Pretraining ## Multilingual model We train a multilingual BERT model using the 32k vocab with the official BERT implementation on a v3-32 TPU using the following parameters: ```bash python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \ --output_dir gs://histolectra/bert-base-historic-multilingual-cased \ --bert_config_file ./config.json \ --max_seq_length=512 \ --max_predictions_per_seq=75 \ --do_train=True \ --train_batch_size=128 \ --num_train_steps=3000000 \ --learning_rate=1e-4 \ --save_checkpoints_steps=100000 \ --keep_checkpoint_max=20 \ --use_tpu=True \ --tpu_name=electra-2 \ --num_tpu_cores=32 ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_historic-multilingual.png) # Acknowledgments Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
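# Usage

A minimal usage sketch for the multilingual model (assuming a recent Transformers release; the example sentences are the widget examples from this model card):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_name = "dbmdz/bert-base-historic-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# The same checkpoint handles all five pretraining languages, e.g. French and German:
print(fill_mask("Comme, à cette époque [MASK] était celle de la"))
print(fill_mask("In [MASK] an atmosphärischen Nahrungsmitteln"))
```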
{"language": "multilingual", "license": "mit", "widget": [{"text": "and I cannot conceive the reafon why [MASK] hath"}, {"text": "T\u00e4k\u00e4l\u00e4inen sanomalehdist\u00f6 [MASK] erit - t\u00e4in"}, {"text": "Det vore [MASK] h\u00e4ller n\u00f6dv\u00e4ndigt att be"}, {"text": "Comme, \u00e0 cette \u00e9poque [MASK] \u00e9tait celle de la"}, {"text": "In [MASK] an atmosph\u00e4rischen Nahrungsmitteln"}]}
dbmdz/bert-base-historic-multilingual-cased
null
[ "transformers", "pytorch", "jax", "tensorboard", "safetensors", "bert", "fill-mask", "multilingual", "arxiv:2205.15575", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# 🤗 + 📚 dbmdz BERT and ELECTRA models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources Italian BERT and ELECTRA models 🎉 # Italian BERT The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens. For sentence splitting, we use NLTK (faster compared to spacy). Our cased and uncased models are training with an initial sequence length of 512 subwords for ~2-3M steps. For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/). Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens. Note: Unfortunately, a wrong vocab size was used when training the XXL models. This explains the mismatch of the "real" vocab size of 31102, compared to the vocab size specified in `config.json`. However, the model is working and all evaluations were done under those circumstances. See [this issue](https://github.com/dbmdz/berts/issues/7) for more information. The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch size of 128. We pretty much following the ELECTRA training procedure as used for [BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra). ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | ---------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt) | `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt) | `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt) | `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt) | `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt) | 
`dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt) ## Results For results on downstream tasks like NER or PoS tagging, please refer to [this repository](https://github.com/stefan-it/italian-bertelectra). ## Usage With Transformers >= 2.3 our Italian BERT models can be loaded like: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/bert-base-italian-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` To load the (recommended) Italian XXL BERT models, just use: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/bert-base-italian-xxl-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` To load the Italian XXL ELECTRA model (discriminator), just use: ```python from transformers import AutoModelWithLMHead, AutoTokenizer model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelWithLMHead.from_pretrained(model_name) ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT/ELECTRA models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "it", "license": "mit", "datasets": ["wikipedia"]}
dbmdz/bert-base-italian-cased
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "it", "dataset:wikipedia", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# 🤗 + 📚 dbmdz BERT and ELECTRA models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources Italian BERT and ELECTRA models 🎉 # Italian BERT The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens. For sentence splitting, we use NLTK (faster compared to spacy). Our cased and uncased models are training with an initial sequence length of 512 subwords for ~2-3M steps. For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/). Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens. Note: Unfortunately, a wrong vocab size was used when training the XXL models. This explains the mismatch of the "real" vocab size of 31102, compared to the vocab size specified in `config.json`. However, the model is working and all evaluations were done under those circumstances. See [this issue](https://github.com/dbmdz/berts/issues/7) for more information. The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch size of 128. We pretty much following the ELECTRA training procedure as used for [BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra). ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | ---------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt) | `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt) | `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt) | `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt) | `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt) | 
`dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt) ## Results For results on downstream tasks like NER or PoS tagging, please refer to [this repository](https://github.com/stefan-it/italian-bertelectra). ## Usage With Transformers >= 2.3 our Italian BERT models can be loaded like: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/bert-base-italian-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` To load the (recommended) Italian XXL BERT models, just use: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/bert-base-italian-xxl-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` To load the Italian XXL ELECTRA model (discriminator), just use: ```python from transformers import AutoModelWithLMHead, AutoTokenizer model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelWithLMHead.from_pretrained(model_name) ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT/ELECTRA models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "it", "license": "mit", "datasets": ["wikipedia"]}
dbmdz/bert-base-italian-uncased
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "it", "dataset:wikipedia", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# 🤗 + 📚 dbmdz BERT and ELECTRA models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources Italian BERT and ELECTRA models 🎉 # Italian BERT The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens. For sentence splitting, we use NLTK (faster compared to spacy). Our cased and uncased models are training with an initial sequence length of 512 subwords for ~2-3M steps. For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/). Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens. Note: Unfortunately, a wrong vocab size was used when training the XXL models. This explains the mismatch of the "real" vocab size of 31102, compared to the vocab size specified in `config.json`. However, the model is working and all evaluations were done under those circumstances. See [this issue](https://github.com/dbmdz/berts/issues/7) for more information. The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch size of 128. We pretty much following the ELECTRA training procedure as used for [BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra). ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | ---------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt) | `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt) | `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt) | `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt) | `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt) | 
`dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt) ## Results For results on downstream tasks like NER or PoS tagging, please refer to [this repository](https://github.com/stefan-it/italian-bertelectra). ## Usage With Transformers >= 2.3 our Italian BERT models can be loaded like: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/bert-base-italian-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` To load the (recommended) Italian XXL BERT models, just use: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/bert-base-italian-xxl-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` To load the Italian XXL ELECTRA model (discriminator), just use: ```python from transformers import AutoModelWithLMHead, AutoTokenizer model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelWithLMHead.from_pretrained(model_name) ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT/ELECTRA models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "it", "license": "mit", "datasets": ["wikipedia"]}
dbmdz/bert-base-italian-xxl-cased
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "it", "dataset:wikipedia", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# 🤗 + 📚 dbmdz BERT and ELECTRA models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources Italian BERT and ELECTRA models 🎉 # Italian BERT The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens. For sentence splitting, we use NLTK (faster compared to spacy). Our cased and uncased models are training with an initial sequence length of 512 subwords for ~2-3M steps. For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/). Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens. Note: Unfortunately, a wrong vocab size was used when training the XXL models. This explains the mismatch of the "real" vocab size of 31102, compared to the vocab size specified in `config.json`. However, the model is working and all evaluations were done under those circumstances. See [this issue](https://github.com/dbmdz/berts/issues/7) for more information. The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch size of 128. We pretty much following the ELECTRA training procedure as used for [BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra). ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | ---------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt) | `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt) | `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt) | `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt) | `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt) | 
`dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt) ## Results For results on downstream tasks like NER or PoS tagging, please refer to [this repository](https://github.com/stefan-it/italian-bertelectra). ## Usage With Transformers >= 2.3 our Italian BERT models can be loaded like: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/bert-base-italian-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` To load the (recommended) Italian XXL BERT models, just use: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/bert-base-italian-xxl-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` To load the Italian XXL ELECTRA model (discriminator), just use: ```python from transformers import AutoModelWithLMHead, AutoTokenizer model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelWithLMHead.from_pretrained(model_name) ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT/ELECTRA models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "it", "license": "mit", "datasets": ["wikipedia"]}
dbmdz/bert-base-italian-xxl-uncased
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "it", "dataset:wikipedia", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
dbmdz/bert-base-multilingual-cased-finetuned-conll03-dutch
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
dbmdz/bert-base-multilingual-cased-finetuned-conll03-spanish
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# Historic Language Models (HLMs) ## Languages Our Historic Language Models Zoo contains support for the following languages - incl. their training data source: | Language | Training data | Size | -------- | ------------- | ---- | German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered) | French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered) | English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered) | Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB | Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB ## Models At the moment, the following models are available on the model hub: | Model identifier | Model Hub link | --------------------------------------------- | -------------------------------------------------------------------------- | `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) | `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased) | `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased) | `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased) # Corpora Stats ## German Europeana Corpus We provide some statistics using different thresholds of ocr confidences, in order to shrink down the corpus size and use less-noisier data: | OCR confidence | Size | -------------- | ---- | **0.60** | 28GB | 0.65 | 18GB | 0.70 | 13GB For the final corpus we use a OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution: ![German Europeana Corpus Stats](stats/figures/german_europeana_corpus_stats.png) ## French Europeana Corpus Like German, we use different ocr confidence thresholds: | OCR confidence | Size | -------------- | ---- | 0.60 | 31GB | 0.65 | 27GB | **0.70** | 27GB | 0.75 | 23GB | 0.80 | 11GB For the final corpus we use a OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution: ![French Europeana Corpus Stats](stats/figures/french_europeana_corpus_stats.png) ## British Library Corpus Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering: | Years | Size | ----------------- | ---- | ALL | 24GB | >= 1800 && < 1900 | 24GB We use the year filtered variant. The following plot shows a tokens per year distribution: ![British Library Corpus Stats](stats/figures/bl_corpus_stats.png) ## Finnish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.2GB The following plot shows a tokens per year distribution: ![Finnish Europeana Corpus Stats](stats/figures/finnish_europeana_corpus_stats.png) ## Swedish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.1GB The following plot shows a tokens per year distribution: ![Swedish Europeana Corpus Stats](stats/figures/swedish_europeana_corpus_stats.png) ## All Corpora The following plot shows a tokens per year distribution of the complete training corpus: ![All Corpora Stats](stats/figures/all_corpus_stats.png) # Multilingual Vocab generation For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB. 
The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs: | Language | Size | -------- | ---- | German | 10GB | French | 10GB | English | 10GB | Finnish | 9.5GB | Swedish | 9.7GB We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora: | Language | NER corpora | -------- | ------------------ | German | CLEF-HIPE, NewsEye | French | CLEF-HIPE, NewsEye | English | CLEF-HIPE | Finnish | NewsEye | Swedish | NewsEye Breakdown of subword fertility rate and unknown portion per language for the 32k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.43 | 0.0004 | French | 1.25 | 0.0001 | English | 1.25 | 0.0 | Finnish | 1.69 | 0.0007 | Swedish | 1.43 | 0.0 Breakdown of subword fertility rate and unknown portion per language for the 64k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.31 | 0.0004 | French | 1.16 | 0.0001 | English | 1.17 | 0.0 | Finnish | 1.54 | 0.0007 | Swedish | 1.32 | 0.0 # Final pretraining corpora We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here: | Language | Size | -------- | ---- | German | 28GB | French | 27GB | English | 24GB | Finnish | 27GB | Swedish | 27GB Total size is 130GB. # Pretraining ## Multilingual model We train a multilingual BERT model using the 32k vocab with the official BERT implementation on a v3-32 TPU using the following parameters: ```bash python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \ --output_dir gs://histolectra/bert-base-historic-multilingual-cased \ --bert_config_file ./config.json \ --max_seq_length=512 \ --max_predictions_per_seq=75 \ --do_train=True \ --train_batch_size=128 \ --num_train_steps=3000000 \ --learning_rate=1e-4 \ --save_checkpoints_steps=100000 \ --keep_checkpoint_max=20 \ --use_tpu=True \ --tpu_name=electra-2 \ --num_tpu_cores=32 ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_historic-multilingual.png) ## English model The English BERT model - with texts from British Library corpus - was trained with the Hugging Face JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-historic-english-cased/ \ --tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \ --train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \ --validation_file /mnt/datasets/bl-corpus/english_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 10 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_historic_english.png) ## Finnish model The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face JAX/FLAX implementation for 40 epochs (approx. 
1M steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \ --tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \ --train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \ --validation_file /mnt/datasets/hlms/finnish_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 40 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_finnish_europeana.png) ## Swedish model The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \ --tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \ --train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \ --validation_file /mnt/datasets/hlms/swedish_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 40 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_swedish_europeana.png) # Acknowledgments Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
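A minimal usage sketch for the Swedish Europeana model, assuming only the standard Transformers fill-mask pipeline and the model identifier listed above (the example sentence is the widget text of this card):

```python
from transformers import pipeline

# Minimal sketch: masked-token prediction with the Swedish Europeana model.
fill_mask = pipeline(
    "fill-mask",
    model="dbmdz/bert-base-swedish-europeana-cased",
    tokenizer="dbmdz/bert-base-swedish-europeana-cased",
)

# Example sentence taken from the widget text of this model card.
for prediction in fill_mask("Det vore [MASK] häller nödvändigt att be"):
    print(prediction["token_str"], prediction["score"])
```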
{"language": "swedish", "license": "mit", "widget": [{"text": "Det vore [MASK] h\u00e4ller n\u00f6dv\u00e4ndigt att be"}]}
dbmdz/bert-base-swedish-europeana-cased
null
[ "transformers", "pytorch", "jax", "tensorboard", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🤗 + 📚 dbmdz Turkish BERT model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources a cased model for Turkish 🎉 # 🇹🇷 BERTurk BERTurk is a community-driven cased BERT model for Turkish. Some datasets used for pretraining and evaluation are contributed by the awesome Turkish NLP community, as well as the decision for the model name: BERTurk. ## Stats The current version of the model is trained on a filtered and sentence-segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/), a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/). The final training corpus has a size of 35GB and 4,404,976,662 tokens. Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model on a TPU v3-8 for 2M steps. For this model we use a vocab size of 128k. ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | ------------------------------------ | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-turkish-128k-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-cased/vocab.txt) ## Usage With Transformers >= 2.3 our BERTurk cased model can be loaded like this: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-128k-cased") model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-128k-cased") ``` ## Results For results on PoS tagging or NER tasks, please refer to [this repository](https://github.com/stefan-it/turkish-bert). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us with additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us with the Turkish NER dataset for evaluation. Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
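The `AutoModel` snippet above returns raw hidden states; for masked-token prediction, a minimal sketch using the standard fill-mask pipeline looks like this (the Turkish example sentence is purely illustrative):

```python
from transformers import pipeline

# Minimal sketch: masked-token prediction with BERTurk (128k, cased).
fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-turkish-128k-cased")

# The example sentence is only for illustration.
for prediction in fill_mask("Türkiye'nin başkenti [MASK] şehridir."):
    print(prediction["token_str"], prediction["score"])
```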
{"language": "tr", "license": "mit"}
dbmdz/bert-base-turkish-128k-cased
null
[ "transformers", "pytorch", "tf", "jax", "bert", "tr", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🤗 + 📚 dbmdz Turkish BERT model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources an uncased model for Turkish 🎉 # 🇹🇷 BERTurk BERTurk is a community-driven uncased BERT model for Turkish. Some datasets used for pretraining and evaluation are contributed by the awesome Turkish NLP community, as well as the decision for the model name: BERTurk. ## Stats The current version of the model is trained on a filtered and sentence-segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/), a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/). The final training corpus has a size of 35GB and 4,404,976,662 tokens. Thanks to Google's TensorFlow Research Cloud (TFRC) we could train an uncased model on a TPU v3-8 for 2M steps. For this model we use a vocab size of 128k. ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | -------------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-turkish-128k-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/vocab.txt) ## Usage With Transformers >= 2.3 our BERTurk uncased model can be loaded like this: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-128k-uncased") model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-128k-uncased") ``` ## Results For results on PoS tagging or NER tasks, please refer to [this repository](https://github.com/stefan-it/turkish-bert). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us with additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us with the Turkish NER dataset for evaluation. Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "tr", "license": "mit"}
dbmdz/bert-base-turkish-128k-uncased
null
[ "transformers", "pytorch", "tf", "jax", "bert", "tr", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🤗 + 📚 dbmdz Turkish BERT model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources a cased model for Turkish 🎉 # 🇹🇷 BERTurk BERTurk is a community-driven cased BERT model for Turkish. Some datasets used for pretraining and evaluation are contributed by the awesome Turkish NLP community, as well as the decision for the model name: BERTurk. ## Stats The current version of the model is trained on a filtered and sentence-segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/), a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/). The final training corpus has a size of 35GB and 4,404,976,662 tokens. Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model on a TPU v3-8 for 2M steps. ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | --------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-turkish-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-cased/vocab.txt) ## Usage With Transformers >= 2.3 our BERTurk cased model can be loaded like this: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased") model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-cased") ``` ## Results For results on PoS tagging or NER tasks, please refer to [this repository](https://github.com/stefan-it/turkish-bert). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us with additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us with the Turkish NER dataset for evaluation. Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "tr", "license": "mit"}
dbmdz/bert-base-turkish-cased
null
[ "transformers", "pytorch", "tf", "jax", "bert", "tr", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🤗 + 📚 dbmdz Turkish BERT model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources an uncased model for Turkish 🎉 # 🇹🇷 BERTurk BERTurk is a community-driven uncased BERT model for Turkish. Some datasets used for pretraining and evaluation are contributed by the awesome Turkish NLP community, as well as the decision for the model name: BERTurk. ## Stats The current version of the model is trained on a filtered and sentence-segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/), a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/). The final training corpus has a size of 35GB and 4,404,976,662 tokens. Thanks to Google's TensorFlow Research Cloud (TFRC) we could train an uncased model on a TPU v3-8 for 2M steps. ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | --------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-turkish-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-uncased/vocab.txt) ## Usage With Transformers >= 2.3 our BERTurk uncased model can be loaded like this: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-uncased") model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-uncased") ``` ## Results For results on PoS tagging or NER tasks, please refer to [this repository](https://github.com/stefan-it/turkish-bert). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us with additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us with the Turkish NER dataset for evaluation. Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "tr", "license": "mit"}
dbmdz/bert-base-turkish-uncased
null
[ "transformers", "pytorch", "tf", "jax", "bert", "tr", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
dbmdz/bert-large-cased-finetuned-conll03-english
null
[ "transformers", "pytorch", "tf", "jax", "rust", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# Historic Language Models (HLMs) ## Languages Our Historic Language Models Zoo contains support for the following languages - incl. their training data source: | Language | Training data | Size | -------- | ------------- | ---- | German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered) | French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered) | English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered) | Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB | Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB ## Models At the moment, the following models are available on the model hub: | Model identifier | Model Hub link | --------------------------------------------- | -------------------------------------------------------------------------- | `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) | `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased) | `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased) | `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased) We also released smaller models for the multilingual model: | Model identifier | Model Hub link | ----------------------------------------------- | --------------------------------------------------------------------------- | `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased) | `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased) | `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased) | `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased) **Notice**: We have previously released language models for Historic German and French that were trained on noisier data - see [this repo](https://github.com/stefan-it/europeana-bert) for more information: | Model identifier | Model Hub link | --------------------------------------------- | -------------------------------------------------------------------------- | `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased) | `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) # Corpora Stats ## German Europeana Corpus We provide some statistics using different OCR confidence thresholds, in order to shrink down the corpus size and use less noisy data: | OCR confidence | Size | -------------- | ---- | **0.60** | 28GB | 0.65 | 18GB | 0.70 | 13GB For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution: ![German Europeana Corpus Stats](stats/figures/german_europeana_corpus_stats.png) ## French Europeana Corpus Like German, we use different OCR confidence thresholds: | OCR confidence | Size | -------------- | ---- | 0.60 | 31GB | 0.65 | 27GB | **0.70** | 27GB | 0.75 | 23GB | 0.80 | 11GB For the final corpus we use an OCR confidence of 0.7 (27GB). 
The following plot shows a tokens per year distribution: ![French Europeana Corpus Stats](stats/figures/french_europeana_corpus_stats.png) ## British Library Corpus Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering: | Years | Size | ----------------- | ---- | ALL | 24GB | >= 1800 && < 1900 | 24GB We use the year filtered variant. The following plot shows a tokens per year distribution: ![British Library Corpus Stats](stats/figures/bl_corpus_stats.png) ## Finnish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.2GB The following plot shows a tokens per year distribution: ![Finnish Europeana Corpus Stats](stats/figures/finnish_europeana_corpus_stats.png) ## Swedish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.1GB The following plot shows a tokens per year distribution: ![Swedish Europeana Corpus Stats](stats/figures/swedish_europeana_corpus_stats.png) ## All Corpora The following plot shows a tokens per year distribution of the complete training corpus: ![All Corpora Stats](stats/figures/all_corpus_stats.png) # Multilingual Vocab generation For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB. The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs: | Language | Size | -------- | ---- | German | 10GB | French | 10GB | English | 10GB | Finnish | 9.5GB | Swedish | 9.7GB We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora: | Language | NER corpora | -------- | ------------------ | German | CLEF-HIPE, NewsEye | French | CLEF-HIPE, NewsEye | English | CLEF-HIPE | Finnish | NewsEye | Swedish | NewsEye Breakdown of subword fertility rate and unknown portion per language for the 32k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.43 | 0.0004 | French | 1.25 | 0.0001 | English | 1.25 | 0.0 | Finnish | 1.69 | 0.0007 | Swedish | 1.43 | 0.0 Breakdown of subword fertility rate and unknown portion per language for the 64k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.31 | 0.0004 | French | 1.16 | 0.0001 | English | 1.17 | 0.0 | Finnish | 1.54 | 0.0007 | Swedish | 1.32 | 0.0 # Final pretraining corpora We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here: | Language | Size | -------- | ---- | German | 28GB | French | 27GB | English | 24GB | Finnish | 27GB | Swedish | 27GB Total size is 130GB. 
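To make the vocab evaluation above more concrete, here is a minimal sketch of how subword fertility and the `[UNK]` portion can be computed with a released tokenizer; the corpus file name is a placeholder and the exact evaluation script may differ:

```python
from transformers import AutoTokenizer

# Minimal sketch: subword fertility = subwords per whitespace token,
# unknown portion = fraction of subwords mapped to the [UNK] token.
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")

def fertility_and_unk(lines):
    n_words = n_subwords = n_unk = 0
    for line in lines:
        for word in line.split():
            pieces = tokenizer.tokenize(word)
            n_words += 1
            n_subwords += len(pieces)
            n_unk += sum(piece == tokenizer.unk_token for piece in pieces)
    return n_subwords / max(n_words, 1), n_unk / max(n_subwords, 1)

# Placeholder corpus dump, one sentence per line.
with open("newseye_german_sentences.txt", encoding="utf-8") as f:
    fertility, unk_portion = fertility_and_unk(f)
print(f"fertility={fertility:.2f}, unknown portion={unk_portion:.4f}")
```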
# Smaller multilingual models Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962) paper, we train smaller models (different layers and hidden sizes), and report number of parameters and pre-training costs: | Model (Layer / Hidden size) | Parameters | Pre-Training time | --------------------------- | ----------: | ----------------------: | hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps | hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps | hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps | hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps | hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset: ![NewsEye hmBERT Evaluation](stats/figures/newseye-hmbert-evaluation.png) # Pretraining ## Multilingual model - hmBERT Base We train a multilingual BERT model using the 32k vocab with the official BERT implementation on a v3-32 TPU using the following parameters: ```bash python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \ --output_dir gs://histolectra/bert-base-historic-multilingual-cased \ --bert_config_file ./config.json \ --max_seq_length=512 \ --max_predictions_per_seq=75 \ --do_train=True \ --train_batch_size=128 \ --num_train_steps=3000000 \ --learning_rate=1e-4 \ --save_checkpoints_steps=100000 \ --keep_checkpoint_max=20 \ --use_tpu=True \ --tpu_name=electra-2 \ --num_tpu_cores=32 ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_historic-multilingual.png) ## Smaller multilingual models We use the same parameters as used for training the base model. ### hmBERT Tiny The following plot shows the pretraining loss curve for the tiny model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-tiny.png) ### hmBERT Mini The following plot shows the pretraining loss curve for the mini model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-mini.png) ### hmBERT Small The following plot shows the pretraining loss curve for the small model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-small.png) ### hmBERT Medium The following plot shows the pretraining loss curve for the medium model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-medium.png) ## English model The English BERT model - with texts from British Library corpus - was trained with the Hugging Face JAX/FLAX implementation for 10 epochs (approx. 
1M steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-historic-english-cased/ \ --tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \ --train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \ --validation_file /mnt/datasets/bl-corpus/english_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 10 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_historic_english.png) ## Finnish model The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \ --tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \ --train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \ --validation_file /mnt/datasets/hlms/finnish_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 40 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_finnish_europeana.png) ## Swedish model The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \ --tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \ --train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \ --validation_file /mnt/datasets/hlms/swedish_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 40 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_swedish_europeana.png) # Acknowledgments Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
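A minimal usage sketch for this checkpoint, assuming only the standard Transformers masked-LM API (the historic English widget sentence is reused as input):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_name = "dbmdz/bert-medium-historic-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# Widget sentence from this model card (historic English with long-s spelling).
print(fill_mask("and I cannot conceive the reafon why [MASK] hath")[0])
```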
{"language": "multilingual", "license": "mit", "widget": [{"text": "and I cannot conceive the reafon why [MASK] hath"}, {"text": "T\u00e4k\u00e4l\u00e4inen sanomalehdist\u00f6 [MASK] erit - t\u00e4in"}, {"text": "Det vore [MASK] h\u00e4ller n\u00f6dv\u00e4ndigt att be"}, {"text": "Comme, \u00e0 cette \u00e9poque [MASK] \u00e9tait celle de la"}, {"text": "In [MASK] an atmosph\u00e4rischen Nahrungsmitteln"}]}
dbmdz/bert-medium-historic-multilingual-cased
null
[ "transformers", "pytorch", "tf", "tensorboard", "safetensors", "bert", "fill-mask", "multilingual", "arxiv:1908.08962", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# Historic Language Models (HLMs) ## Languages Our Historic Language Models Zoo contains support for the following languages - incl. their training data source: | Language | Training data | Size | -------- | ------------- | ---- | German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered) | French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered) | English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered) | Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB | Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB ## Models At the moment, the following models are available on the model hub: | Model identifier | Model Hub link | --------------------------------------------- | -------------------------------------------------------------------------- | `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) | `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased) | `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased) | `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased) We also released smaller models for the multilingual model: | Model identifier | Model Hub link | ----------------------------------------------- | --------------------------------------------------------------------------- | `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased) | `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased) | `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased) | `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) **Notice**: We have released language models for Historic German and French trained on more noisier data earlier - see [this repo](https://github.com/stefan-it/europeana-bert) for more information: | Model identifier | Model Hub link | --------------------------------------------- | -------------------------------------------------------------------------- | `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased) | `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) # Corpora Stats ## German Europeana Corpus We provide some statistics using different thresholds of ocr confidences, in order to shrink down the corpus size and use less-noisier data: | OCR confidence | Size | -------------- | ---- | **0.60** | 28GB | 0.65 | 18GB | 0.70 | 13GB For the final corpus we use a OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution: ![German Europeana Corpus Stats](stats/figures/german_europeana_corpus_stats.png) ## French Europeana Corpus Like German, we use different ocr confidence thresholds: | OCR confidence | Size | -------------- | ---- | 0.60 | 31GB | 0.65 | 27GB | **0.70** | 27GB | 0.75 | 23GB | 0.80 | 11GB For the final corpus we use a OCR confidence of 0.7 (27GB). 
The following plot shows a tokens per year distribution: ![French Europeana Corpus Stats](stats/figures/french_europeana_corpus_stats.png) ## British Library Corpus Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering: | Years | Size | ----------------- | ---- | ALL | 24GB | >= 1800 && < 1900 | 24GB We use the year filtered variant. The following plot shows a tokens per year distribution: ![British Library Corpus Stats](stats/figures/bl_corpus_stats.png) ## Finnish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.2GB The following plot shows a tokens per year distribution: ![Finnish Europeana Corpus Stats](stats/figures/finnish_europeana_corpus_stats.png) ## Swedish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.1GB The following plot shows a tokens per year distribution: ![Swedish Europeana Corpus Stats](stats/figures/swedish_europeana_corpus_stats.png) ## All Corpora The following plot shows a tokens per year distribution of the complete training corpus: ![All Corpora Stats](stats/figures/all_corpus_stats.png) # Multilingual Vocab generation For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB. The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs: | Language | Size | -------- | ---- | German | 10GB | French | 10GB | English | 10GB | Finnish | 9.5GB | Swedish | 9.7GB We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora: | Language | NER corpora | -------- | ------------------ | German | CLEF-HIPE, NewsEye | French | CLEF-HIPE, NewsEye | English | CLEF-HIPE | Finnish | NewsEye | Swedish | NewsEye Breakdown of subword fertility rate and unknown portion per language for the 32k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.43 | 0.0004 | French | 1.25 | 0.0001 | English | 1.25 | 0.0 | Finnish | 1.69 | 0.0007 | Swedish | 1.43 | 0.0 Breakdown of subword fertility rate and unknown portion per language for the 64k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.31 | 0.0004 | French | 1.16 | 0.0001 | English | 1.17 | 0.0 | Finnish | 1.54 | 0.0007 | Swedish | 1.32 | 0.0 # Final pretraining corpora We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here: | Language | Size | -------- | ---- | German | 28GB | French | 27GB | English | 24GB | Finnish | 27GB | Swedish | 27GB Total size is 130GB. 
# Smaller multilingual models Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962) paper, we train smaller models (different layers and hidden sizes), and report number of parameters and pre-training costs: | Model (Layer / Hidden size) | Parameters | Pre-Training time | --------------------------- | ----------: | ----------------------: | hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps | hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps | hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps | hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps | hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset: ![NewsEye hmBERT Evaluation](stats/figures/newseye-hmbert-evaluation.png) # Pretraining ## Multilingual model - hmBERT Base We train a multilingual BERT model using the 32k vocab with the official BERT implementation on a v3-32 TPU using the following parameters: ```bash python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \ --output_dir gs://histolectra/bert-base-historic-multilingual-cased \ --bert_config_file ./config.json \ --max_seq_length=512 \ --max_predictions_per_seq=75 \ --do_train=True \ --train_batch_size=128 \ --num_train_steps=3000000 \ --learning_rate=1e-4 \ --save_checkpoints_steps=100000 \ --keep_checkpoint_max=20 \ --use_tpu=True \ --tpu_name=electra-2 \ --num_tpu_cores=32 ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_historic-multilingual.png) ## Smaller multilingual models We use the same parameters as used for training the base model. ### hmBERT Tiny The following plot shows the pretraining loss curve for the tiny model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-tiny.png) ### hmBERT Mini The following plot shows the pretraining loss curve for the mini model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-mini.png) ### hmBERT Small The following plot shows the pretraining loss curve for the small model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-small.png) ### hmBERT Medium The following plot shows the pretraining loss curve for the medium model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-medium.png) ## English model The English BERT model - with texts from British Library corpus - was trained with the Hugging Face JAX/FLAX implementation for 10 epochs (approx. 
1M steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-historic-english-cased/ \ --tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \ --train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \ --validation_file /mnt/datasets/bl-corpus/english_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 10 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_historic_english.png) ## Finnish model The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \ --tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \ --train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \ --validation_file /mnt/datasets/hlms/finnish_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 40 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_finnish_europeana.png) ## Swedish model The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \ --tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \ --train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \ --validation_file /mnt/datasets/hlms/swedish_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 40 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_swedish_europeana.png) # Acknowledgments Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
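As a rough, hedged cross-check of the parameter counts in the table above (the reported totals may count the MLM head differently):

```python
from transformers import AutoModel

# Counts the encoder parameters of hmBERT Mini; the table above reports 11.55M.
model = AutoModel.from_pretrained("dbmdz/bert-mini-historic-multilingual-cased")
num_params = sum(p.numel() for p in model.parameters())
print(f"hmBERT Mini parameters: {num_params / 1e6:.2f}M")
```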
{"language": "multilingual", "license": "mit", "widget": [{"text": "and I cannot conceive the reafon why [MASK] hath"}, {"text": "T\u00e4k\u00e4l\u00e4inen sanomalehdist\u00f6 [MASK] erit - t\u00e4in"}, {"text": "Det vore [MASK] h\u00e4ller n\u00f6dv\u00e4ndigt att be"}, {"text": "Comme, \u00e0 cette \u00e9poque [MASK] \u00e9tait celle de la"}, {"text": "In [MASK] an atmosph\u00e4rischen Nahrungsmitteln"}]}
dbmdz/bert-mini-historic-multilingual-cased
null
[ "transformers", "pytorch", "tf", "tensorboard", "safetensors", "bert", "fill-mask", "multilingual", "arxiv:1908.08962", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# Historic Language Models (HLMs) ## Languages Our Historic Language Models Zoo contains support for the following languages - incl. their training data source: | Language | Training data | Size | -------- | ------------- | ---- | German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered) | French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered) | English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered) | Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB | Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB ## Models At the moment, the following models are available on the model hub: | Model identifier | Model Hub link | --------------------------------------------- | -------------------------------------------------------------------------- | `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) | `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased) | `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased) | `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased) We also released smaller models for the multilingual model: | Model identifier | Model Hub link | ----------------------------------------------- | --------------------------------------------------------------------------- | `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased) | `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased) | `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased) | `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) **Notice**: We have released language models for Historic German and French trained on more noisier data earlier - see [this repo](https://github.com/stefan-it/europeana-bert) for more information: | Model identifier | Model Hub link | --------------------------------------------- | -------------------------------------------------------------------------- | `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased) | `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) # Corpora Stats ## German Europeana Corpus We provide some statistics using different thresholds of ocr confidences, in order to shrink down the corpus size and use less-noisier data: | OCR confidence | Size | -------------- | ---- | **0.60** | 28GB | 0.65 | 18GB | 0.70 | 13GB For the final corpus we use a OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution: ![German Europeana Corpus Stats](stats/figures/german_europeana_corpus_stats.png) ## French Europeana Corpus Like German, we use different ocr confidence thresholds: | OCR confidence | Size | -------------- | ---- | 0.60 | 31GB | 0.65 | 27GB | **0.70** | 27GB | 0.75 | 23GB | 0.80 | 11GB For the final corpus we use a OCR confidence of 0.7 (27GB). 
The following plot shows a tokens per year distribution: ![French Europeana Corpus Stats](stats/figures/french_europeana_corpus_stats.png) ## British Library Corpus Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering: | Years | Size | ----------------- | ---- | ALL | 24GB | >= 1800 && < 1900 | 24GB We use the year filtered variant. The following plot shows a tokens per year distribution: ![British Library Corpus Stats](stats/figures/bl_corpus_stats.png) ## Finnish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.2GB The following plot shows a tokens per year distribution: ![Finnish Europeana Corpus Stats](stats/figures/finnish_europeana_corpus_stats.png) ## Swedish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.1GB The following plot shows a tokens per year distribution: ![Swedish Europeana Corpus Stats](stats/figures/swedish_europeana_corpus_stats.png) ## All Corpora The following plot shows a tokens per year distribution of the complete training corpus: ![All Corpora Stats](stats/figures/all_corpus_stats.png) # Multilingual Vocab generation For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB. The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs: | Language | Size | -------- | ---- | German | 10GB | French | 10GB | English | 10GB | Finnish | 9.5GB | Swedish | 9.7GB We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora: | Language | NER corpora | -------- | ------------------ | German | CLEF-HIPE, NewsEye | French | CLEF-HIPE, NewsEye | English | CLEF-HIPE | Finnish | NewsEye | Swedish | NewsEye Breakdown of subword fertility rate and unknown portion per language for the 32k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.43 | 0.0004 | French | 1.25 | 0.0001 | English | 1.25 | 0.0 | Finnish | 1.69 | 0.0007 | Swedish | 1.43 | 0.0 Breakdown of subword fertility rate and unknown portion per language for the 64k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.31 | 0.0004 | French | 1.16 | 0.0001 | English | 1.17 | 0.0 | Finnish | 1.54 | 0.0007 | Swedish | 1.32 | 0.0 # Final pretraining corpora We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here: | Language | Size | -------- | ---- | German | 28GB | French | 27GB | English | 24GB | Finnish | 27GB | Swedish | 27GB Total size is 130GB. 
# Smaller multilingual models Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962) paper, we train smaller models (different layers and hidden sizes), and report number of parameters and pre-training costs: | Model (Layer / Hidden size) | Parameters | Pre-Training time | --------------------------- | ----------: | ----------------------: | hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps | hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps | hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps | hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps | hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset: ![NewsEye hmBERT Evaluation](stats/figures/newseye-hmbert-evaluation.png) # Pretraining ## Multilingual model - hmBERT Base We train a multilingual BERT model using the 32k vocab with the official BERT implementation on a v3-32 TPU using the following parameters: ```bash python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \ --output_dir gs://histolectra/bert-base-historic-multilingual-cased \ --bert_config_file ./config.json \ --max_seq_length=512 \ --max_predictions_per_seq=75 \ --do_train=True \ --train_batch_size=128 \ --num_train_steps=3000000 \ --learning_rate=1e-4 \ --save_checkpoints_steps=100000 \ --keep_checkpoint_max=20 \ --use_tpu=True \ --tpu_name=electra-2 \ --num_tpu_cores=32 ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_historic-multilingual.png) ## Smaller multilingual models We use the same parameters as used for training the base model. ### hmBERT Tiny The following plot shows the pretraining loss curve for the tiny model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-tiny.png) ### hmBERT Mini The following plot shows the pretraining loss curve for the mini model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-mini.png) ### hmBERT Small The following plot shows the pretraining loss curve for the small model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-small.png) ### hmBERT Medium The following plot shows the pretraining loss curve for the medium model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-medium.png) ## English model The English BERT model - with texts from British Library corpus - was trained with the Hugging Face JAX/FLAX implementation for 10 epochs (approx. 
1M steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-historic-english-cased/ \ --tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \ --train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \ --validation_file /mnt/datasets/bl-corpus/english_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 10 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_historic_english.png) ## Finnish model The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \ --tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \ --train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \ --validation_file /mnt/datasets/hlms/finnish_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 40 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_finnish_europeana.png) ## Swedish model The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \ --tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \ --train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \ --validation_file /mnt/datasets/hlms/swedish_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 40 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_swedish_europeana.png) # Acknowledgments Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
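Besides masked-token prediction, the checkpoint can be used as a plain encoder; a minimal sketch for extracting contextual embeddings (hidden size is 512 for hmBERT Small, see the table above):

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "dbmdz/bert-small-historic-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Input adapted from the historic English widget sentence of this model card.
inputs = tokenizer("and I cannot conceive the reafon why it hath", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Shape: (batch_size, sequence_length, hidden_size); hidden_size is 512 for hmBERT Small.
print(outputs.last_hidden_state.shape)
```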
{"language": "multilingual", "license": "mit", "widget": [{"text": "and I cannot conceive the reafon why [MASK] hath"}, {"text": "T\u00e4k\u00e4l\u00e4inen sanomalehdist\u00f6 [MASK] erit - t\u00e4in"}, {"text": "Det vore [MASK] h\u00e4ller n\u00f6dv\u00e4ndigt att be"}, {"text": "Comme, \u00e0 cette \u00e9poque [MASK] \u00e9tait celle de la"}, {"text": "In [MASK] an atmosph\u00e4rischen Nahrungsmitteln"}]}
dbmdz/bert-small-historic-multilingual-cased
null
[ "transformers", "pytorch", "tf", "tensorboard", "safetensors", "bert", "fill-mask", "multilingual", "arxiv:1908.08962", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# Historic Language Models (HLMs) ## Languages Our Historic Language Models Zoo contains support for the following languages - incl. their training data source: | Language | Training data | Size | -------- | ------------- | ---- | German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered) | French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered) | English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered) | Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB | Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB ## Models At the moment, the following models are available on the model hub: | Model identifier | Model Hub link | --------------------------------------------- | -------------------------------------------------------------------------- | `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) | `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased) | `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased) | `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased) We also released smaller models for the multilingual model: | Model identifier | Model Hub link | ----------------------------------------------- | --------------------------------------------------------------------------- | `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased) | `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased) | `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased) | `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) **Notice**: We have released language models for Historic German and French trained on more noisier data earlier - see [this repo](https://github.com/stefan-it/europeana-bert) for more information: | Model identifier | Model Hub link | --------------------------------------------- | -------------------------------------------------------------------------- | `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased) | `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) # Corpora Stats ## German Europeana Corpus We provide some statistics using different thresholds of ocr confidences, in order to shrink down the corpus size and use less-noisier data: | OCR confidence | Size | -------------- | ---- | **0.60** | 28GB | 0.65 | 18GB | 0.70 | 13GB For the final corpus we use a OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution: ![German Europeana Corpus Stats](stats/figures/german_europeana_corpus_stats.png) ## French Europeana Corpus Like German, we use different ocr confidence thresholds: | OCR confidence | Size | -------------- | ---- | 0.60 | 31GB | 0.65 | 27GB | **0.70** | 27GB | 0.75 | 23GB | 0.80 | 11GB For the final corpus we use a OCR confidence of 0.7 (27GB). 
The following plot shows a tokens per year distribution: ![French Europeana Corpus Stats](stats/figures/french_europeana_corpus_stats.png) ## British Library Corpus Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering: | Years | Size | ----------------- | ---- | ALL | 24GB | >= 1800 && < 1900 | 24GB We use the year filtered variant. The following plot shows a tokens per year distribution: ![British Library Corpus Stats](stats/figures/bl_corpus_stats.png) ## Finnish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.2GB The following plot shows a tokens per year distribution: ![Finnish Europeana Corpus Stats](stats/figures/finnish_europeana_corpus_stats.png) ## Swedish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.1GB The following plot shows a tokens per year distribution: ![Swedish Europeana Corpus Stats](stats/figures/swedish_europeana_corpus_stats.png) ## All Corpora The following plot shows a tokens per year distribution of the complete training corpus: ![All Corpora Stats](stats/figures/all_corpus_stats.png) # Multilingual Vocab generation For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB. The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs: | Language | Size | -------- | ---- | German | 10GB | French | 10GB | English | 10GB | Finnish | 9.5GB | Swedish | 9.7GB We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora: | Language | NER corpora | -------- | ------------------ | German | CLEF-HIPE, NewsEye | French | CLEF-HIPE, NewsEye | English | CLEF-HIPE | Finnish | NewsEye | Swedish | NewsEye Breakdown of subword fertility rate and unknown portion per language for the 32k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.43 | 0.0004 | French | 1.25 | 0.0001 | English | 1.25 | 0.0 | Finnish | 1.69 | 0.0007 | Swedish | 1.43 | 0.0 Breakdown of subword fertility rate and unknown portion per language for the 64k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.31 | 0.0004 | French | 1.16 | 0.0001 | English | 1.17 | 0.0 | Finnish | 1.54 | 0.0007 | Swedish | 1.32 | 0.0 # Final pretraining corpora We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here: | Language | Size | -------- | ---- | German | 28GB | French | 27GB | English | 24GB | Finnish | 27GB | Swedish | 27GB Total size is 130GB. 
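To make the vocab comparison above concrete, here is a small hedged sketch of how a subword fertility rate (subword tokens per whitespace-separated word) can be computed with a Hugging Face tokenizer - the helper function and example sentence are illustrative and not part of the original evaluation setup:

```python
from transformers import AutoTokenizer

def subword_fertility(tokenizer, sentences):
    """Average number of subword tokens per whitespace-separated word."""
    n_words = sum(len(sentence.split()) for sentence in sentences)
    n_subwords = sum(len(tokenizer.tokenize(sentence)) for sentence in sentences)
    return n_subwords / n_words

# Illustrative check with the multilingual hmBERT vocab and a German sentence.
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")
print(subword_fertility(tokenizer, ["Die Zeitung berichtete gestern über das Ereignis."]))
```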
# Smaller multilingual models Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962) paper, we train smaller models (different layers and hidden sizes), and report number of parameters and pre-training costs: | Model (Layer / Hidden size) | Parameters | Pre-Training time | --------------------------- | ----------: | ----------------------: | hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps | hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps | hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps | hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps | hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset: ![NewsEye hmBERT Evaluation](stats/figures/newseye-hmbert-evaluation.png) # Pretraining ## Multilingual model - hmBERT Base We train a multilingual BERT model using the 32k vocab with the official BERT implementation on a v3-32 TPU using the following parameters: ```bash python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \ --output_dir gs://histolectra/bert-base-historic-multilingual-cased \ --bert_config_file ./config.json \ --max_seq_length=512 \ --max_predictions_per_seq=75 \ --do_train=True \ --train_batch_size=128 \ --num_train_steps=3000000 \ --learning_rate=1e-4 \ --save_checkpoints_steps=100000 \ --keep_checkpoint_max=20 \ --use_tpu=True \ --tpu_name=electra-2 \ --num_tpu_cores=32 ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_historic-multilingual.png) ## Smaller multilingual models We use the same parameters as used for training the base model. ### hmBERT Tiny The following plot shows the pretraining loss curve for the tiny model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-tiny.png) ### hmBERT Mini The following plot shows the pretraining loss curve for the mini model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-mini.png) ### hmBERT Small The following plot shows the pretraining loss curve for the small model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-small.png) ### hmBERT Medium The following plot shows the pretraining loss curve for the medium model: ![Training loss curve](stats/figures/pretraining_loss_hmbert-medium.png) ## English model The English BERT model - with texts from British Library corpus - was trained with the Hugging Face JAX/FLAX implementation for 10 epochs (approx. 
1M steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-historic-english-cased/ \ --tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \ --train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \ --validation_file /mnt/datasets/bl-corpus/english_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 10 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_historic_english.png) ## Finnish model The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \ --tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \ --train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \ --validation_file /mnt/datasets/hlms/finnish_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 40 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_finnish_europeana.png) ## Swedish model The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command: ```bash python3 run_mlm_flax.py --model_type bert \ --config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \ --tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \ --train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \ --validation_file /mnt/datasets/hlms/swedish_validation.txt \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 1e-4 \ --num_train_epochs 40 \ --preprocessing_num_workers 96 \ --output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \ --save_steps 2500 \ --eval_steps 2500 \ --warmup_steps 10000 \ --line_by_line \ --pad_to_max_length ``` The following plot shows the pretraining loss curve: ![Training loss curve](stats/figures/pretraining_loss_swedish_europeana.png) # Acknowledgments Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
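A minimal usage sketch for the hmBERT checkpoints listed above, assuming the standard 🤗 Transformers AutoClass/pipeline API (the masked historic English sentence is illustrative):

```python
from transformers import pipeline

# Any checkpoint from the tables above works here; the tiny model is the smallest.
model_name = "dbmdz/bert-tiny-historic-multilingual-cased"

# The hmBERT models were pretrained with a masked language modelling objective,
# so a fill-mask pipeline is a quick sanity check.
fill_mask = pipeline("fill-mask", model=model_name)

# Illustrative historic English sentence with a masked token.
print(fill_mask("and I cannot conceive the reafon why [MASK] hath"))
```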
{"language": "multilingual", "license": "mit", "widget": [{"text": "and I cannot conceive the reafon why [MASK] hath"}, {"text": "T\u00e4k\u00e4l\u00e4inen sanomalehdist\u00f6 [MASK] erit - t\u00e4in"}, {"text": "Det vore [MASK] h\u00e4ller n\u00f6dv\u00e4ndigt att be"}, {"text": "Comme, \u00e0 cette \u00e9poque [MASK] \u00e9tait celle de la"}, {"text": "In [MASK] an atmosph\u00e4rischen Nahrungsmitteln"}]}
dbmdz/bert-tiny-historic-multilingual-cased
null
[ "transformers", "pytorch", "tf", "tensorboard", "safetensors", "bert", "fill-mask", "multilingual", "arxiv:1908.08962", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
# 🤗 + 📚 dbmdz ConvBERT model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources a German Europeana ConvBERT model 🎉 # German Europeana ConvBERT We use the open source [Europeana newspapers](http://www.europeana-newspapers.eu/) that were provided by *The European Library*. The final training corpus has a size of 51GB and consists of 8,035,986,369 tokens. Detailed information about the data and pretraining steps can be found in [this repository](https://github.com/stefan-it/europeana-bert). ## Results For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert). ## Usage With Transformers >= 4.3 our German Europeana ConvBERT model can be loaded like: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/convbert-base-german-europeana-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` # Huggingface model hub All other German Europeana models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our Europeana BERT, ELECTRA and ConvBERT models just open a new discussion [here](https://github.com/stefan-it/europeana-bert/discussions) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
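As a follow-up to the loading snippet above, a hedged sketch of using the checkpoint for feature extraction (the German example sentence is illustrative):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/convbert-base-german-europeana-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Illustrative German sentence; any text works here.
inputs = tokenizer("Die Zeitung berichtete gestern über das Ereignis.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One contextual embedding per subword token:
embeddings = outputs.last_hidden_state
print(embeddings.shape)  # (1, sequence_length, hidden_size)
```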
{"language": "de", "license": "mit", "tags": ["historic german"]}
dbmdz/convbert-base-german-europeana-cased
null
[ "transformers", "pytorch", "tf", "safetensors", "convbert", "feature-extraction", "historic german", "de", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
# 🤗 + 📚 dbmdz Turkish ConvBERT model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources a cased ConvBERT model for Turkish 🎉 # 🇹🇷 ConvBERTurk ConvBERTurk is a community-driven cased ConvBERT model for Turkish. In addition to the BERT and ELECTRA based models, we also trained a ConvBERT model. The ConvBERT architecture is presented in the ["ConvBERT: Improving BERT with Span-based Dynamic Convolution"](https://arxiv.org/abs/2008.02496) paper. We follow a different training procedure: instead of using a two-phase approach, that pre-trains the model for 90% with 128 sequence length and 10% with 512 sequence length, we pre-train the model with 512 sequence length for 1M steps on a v3-32 TPU. ## Stats The current version of the model is trained on a filtered and sentence segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/), a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/). The final training corpus has a size of 35GB and 44,04,976,662 tokens. Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model on a TPU v3-32! ## Usage With Transformers >= 4.3 our cased ConvBERT model can be loaded like: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/convbert-base-turkish-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` ## Results For results on PoS tagging, NER and Question Answering downstream tasks, please refer to [this repository](https://github.com/stefan-it/turkish-bert). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our DBMDZ BERT models in general, just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation. Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "tr", "license": "mit"}
dbmdz/convbert-base-turkish-cased
null
[ "transformers", "pytorch", "tf", "safetensors", "convbert", "feature-extraction", "tr", "arxiv:2008.02496", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# 🇹🇷 Turkish ConvBERT model <p align="center"> <img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png"> </p> [![DOI](https://zenodo.org/badge/237817454.svg)](https://zenodo.org/badge/latestdoi/237817454) We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉 Some datasets used for pretraining and evaluation are contributed from the awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk. Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann). # Stats We've trained an (cased) ConvBERT model on the recently released Turkish part of the [multiligual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team. After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting in 31,240,963,926 tokens. We used the original 32k vocab (instead of creating a new one). # mC4 ConvBERT In addition to the ELEC**TR**A base model, we also trained an ConvBERT model on the Turkish part of the mC4 corpus. We use a sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU. # Model usage All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz) using their model name. Example usage with 🤗/Transformers: ```python tokenizer = AutoTokenizer.from_pretrained("dbmdz/convbert-base-turkish-mc4-cased") model = AutoModel.from_pretrained("dbmdz/convbert-base-turkish-mc4-cased") ``` # Citation You can use the following BibTeX entry for citation: ```bibtex @software{stefan_schweter_2020_3770924, author = {Stefan Schweter}, title = {BERTurk - BERT models for Turkish}, month = apr, year = 2020, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.3770924}, url = {https://doi.org/10.5281/zenodo.3770924} } ``` # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation. We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the awesome logo! Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️
{"language": "tr", "license": "mit", "datasets": ["allenai/c4"]}
dbmdz/convbert-base-turkish-mc4-cased
null
[ "transformers", "pytorch", "tf", "safetensors", "convbert", "fill-mask", "tr", "dataset:allenai/c4", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# 🇹🇷 Turkish ConvBERT model <p align="center"> <img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png"> </p> [![DOI](https://zenodo.org/badge/237817454.svg)](https://zenodo.org/badge/latestdoi/237817454) We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉 Some datasets used for pretraining and evaluation are contributed from the awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk. Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann). # Stats We've trained an (uncased) ConvBERT model on the recently released Turkish part of the [multiligual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team. After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting in 31,240,963,926 tokens. We used the original 32k vocab (instead of creating a new one). # mC4 ConvBERT In addition to the ELEC**TR**A base model, we also trained an ConvBERT model on the Turkish part of the mC4 corpus. We use a sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU. # Model usage All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz) using their model name. Example usage with 🤗/Transformers: ```python tokenizer = AutoTokenizer.from_pretrained("dbmdz/convbert-base-turkish-mc4-uncased") model = AutoModel.from_pretrained("dbmdz/convbert-base-turkish-mc4-uncased") ``` # Citation You can use the following BibTeX entry for citation: ```bibtex @software{stefan_schweter_2020_3770924, author = {Stefan Schweter}, title = {BERTurk - BERT models for Turkish}, month = apr, year = 2020, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.3770924}, url = {https://doi.org/10.5281/zenodo.3770924} } ``` # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation. We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the awesome logo! Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️
{"language": "tr", "license": "mit", "datasets": ["allenai/c4"]}
dbmdz/convbert-base-turkish-mc4-uncased
null
[ "transformers", "pytorch", "tf", "safetensors", "convbert", "fill-mask", "tr", "dataset:allenai/c4", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🤗 + 📚 dbmdz DistilBERT model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources a German Europeana DistilBERT model 🎉 # German Europeana DistilBERT We use the open source [Europeana newspapers](http://www.europeana-newspapers.eu/) that were provided by *The European Library*. The final training corpus has a size of 51GB and consists of 8,035,986,369 tokens. Detailed information about the data and pretraining steps can be found in [this repository](https://github.com/stefan-it/europeana-bert). ## Results For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert). ## Usage With Transformers >= 4.3 our German Europeana DistilBERT model can be loaded like: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/distilbert-base-german-europeana-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` # Huggingface model hub All other German Europeana models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our Europeana BERT, ELECTRA and ConvBERT models just open a new discussion [here](https://github.com/stefan-it/europeana-bert/discussions) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "de", "license": "mit", "tags": ["historic german"]}
dbmdz/distilbert-base-german-europeana-cased
null
[ "transformers", "pytorch", "tf", "distilbert", "historic german", "de", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🤗 + 📚 dbmdz Distilled Turkish BERT model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources a (cased) distilled model for Turkish 🎉 # 🇹🇷 DistilBERTurk DistilBERTurk is a community-driven cased distilled BERT model for Turkish. DistilBERTurk was trained on 7GB of the original training data that was used for training [BERTurk](https://github.com/stefan-it/turkish-bert/tree/master#stats), using the cased version of BERTurk as teacher model. *DistilBERTurk* was trained with the official Hugging Face implementation from [here](https://github.com/huggingface/transformers/tree/master/examples/distillation) for 5 days on 4 RTX 2080 TI. More details about distillation can be found in the ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108) paper by Sanh et al. (2019). ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue in the [BERTurk](https://github.com/stefan-it/turkish-bert) repository! | Model | Downloads | --------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/distilbert-base-turkish-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/distilbert-base-turkish-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/distilbert-base-turkish-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/distilbert-base-turkish-cased/vocab.txt) ## Usage With Transformers >= 2.3 our DistilBERTurk model can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/distilbert-base-turkish-cased") model = AutoModel.from_pretrained("dbmdz/distilbert-base-turkish-cased") ``` ## Results For results on PoS tagging or NER tasks, please refer to [this repository](https://github.com/stefan-it/turkish-bert). For PoS tagging, DistilBERTurk outperforms the 24-layer XLM-RoBERTa model. The overall performance difference between DistilBERTurk and the original (teacher) BERTurk model is ~1.18%. # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation. Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "tr", "license": "mit"}
dbmdz/distilbert-base-turkish-cased
null
[ "transformers", "pytorch", "tf", "distilbert", "tr", "arxiv:1910.01108", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🤗 + 📚 dbmdz ELECTRA models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources French Europeana ELECTRA models 🎉 # French Europeana ELECTRA We extracted all French texts using the `language` metadata attribute from the Europeana corpus. The resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens. Based on the metadata information, texts from the 18th - 20th century are mainly included in the training corpus. Detailed information about the data and pretraining steps can be found in [this repository](https://github.com/stefan-it/europeana-bert). ## Model weights ELECTRA model weights for PyTorch and TensorFlow are available. * French Europeana ELECTRA (discriminator): `dbmdz/electra-base-french-europeana-cased-discriminator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-discriminator/tree/main) * French Europeana ELECTRA (generator): `dbmdz/electra-base-french-europeana-cased-generator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-generator/tree/main) ## Results For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert). ## Usage With Transformers >= 2.3 our French Europeana ELECTRA model can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator") model = AutoModel.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator") ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our ELECTRA models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download our models from their S3 storage 🤗
{"language": "fr", "license": "mit", "tags": ["historic french"]}
dbmdz/electra-base-french-europeana-cased-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "historic french", "fr", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# 🤗 + 📚 dbmdz ELECTRA models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources French Europeana ELECTRA models 🎉 # French Europeana ELECTRA We extracted all French texts using the `language` metadata attribute from the Europeana corpus. The resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens. Based on the metadata information, texts from the 18th - 20th century are mainly included in the training corpus. Detailed information about the data and pretraining steps can be found in [this repository](https://github.com/stefan-it/europeana-bert). ## Model weights ELECTRA model weights for PyTorch and TensorFlow are available. * French Europeana ELECTRA (discriminator): `dbmdz/electra-base-french-europeana-cased-discriminator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-discriminator/tree/main) * French Europeana ELECTRA (generator): `dbmdz/electra-base-french-europeana-cased-generator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-generator/tree/main) ## Results For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert). ## Usage With Transformers >= 2.3 our French Europeana ELECTRA model can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator") model = AutoModel.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator") ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our ELECTRA models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download our models from their S3 storage 🤗
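The snippet above loads the discriminator; for the generator checkpoint from this card (a masked language model), a minimal hedged fill-mask sketch looks like this (the French example sentence is illustrative):

```python
from transformers import pipeline

# Generator checkpoint of this card; the discriminator is shown in the snippet above.
model_name = "dbmdz/electra-base-french-europeana-cased-generator"
fill_mask = pipeline("fill-mask", model=model_name)

# Illustrative French sentence with one masked token.
print(fill_mask("Paris est la [MASK] de la France."))
```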
{"language": "fr", "license": "mit", "tags": ["historic french"]}
dbmdz/electra-base-french-europeana-cased-generator
null
[ "transformers", "pytorch", "tf", "safetensors", "electra", "fill-mask", "historic french", "fr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
dbmdz/electra-base-german-europeana-cased-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
dbmdz/electra-base-german-europeana-cased-generator
null
[ "transformers", "pytorch", "tf", "safetensors", "electra", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
dbmdz/electra-base-italian-mc4-cased-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
dbmdz/electra-base-italian-mc4-cased-generator
null
[ "transformers", "pytorch", "tf", "safetensors", "electra", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🤗 + 📚 dbmdz BERT and ELECTRA models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources Italian BERT and ELECTRA models 🎉 # Italian BERT The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens. For sentence splitting, we use NLTK (faster compared to spacy). Our cased and uncased models are training with an initial sequence length of 512 subwords for ~2-3M steps. For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/). Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens. Note: Unfortunately, a wrong vocab size was used when training the XXL models. This explains the mismatch of the "real" vocab size of 31102, compared to the vocab size specified in `config.json`. However, the model is working and all evaluations were done under those circumstances. See [this issue](https://github.com/dbmdz/berts/issues/7) for more information. The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch size of 128. We pretty much following the ELECTRA training procedure as used for [BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra). ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | ---------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt) | `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt) | `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt) | `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt) | `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt) | 
`dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt) ## Results For results on downstream tasks like NER or PoS tagging, please refer to [this repository](https://github.com/stefan-it/italian-bertelectra). ## Usage With Transformers >= 2.3 our Italian BERT models can be loaded like: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/bert-base-italian-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` To load the (recommended) Italian XXL BERT models, just use: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/bert-base-italian-xxl-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` To load the Italian XXL ELECTRA model (discriminator), just use: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT/ELECTRA models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
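Since the ELECTRA discriminator is trained to detect replaced tokens rather than to fill masks, a hedged sketch of replaced-token detection with the XXL Italian discriminator (the Italian sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ElectraForPreTraining.from_pretrained(model_name)

# Illustrative sentence; the discriminator scores each token as
# original (low score) or replaced (high score).
inputs = tokenizer("Roma è la capitale d'Italia.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), logits[0]):
    print(f"{token}\t{torch.sigmoid(score).item():.3f}")
```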
{"language": "it", "license": "mit", "datasets": ["wikipedia"]}
dbmdz/electra-base-italian-xxl-cased-discriminator
null
[ "transformers", "pytorch", "electra", "pretraining", "it", "dataset:wikipedia", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# 🤗 + 📚 dbmdz BERT and ELECTRA models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources Italian BERT and ELECTRA models 🎉 # Italian BERT The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens. For sentence splitting, we use NLTK (faster compared to spacy). Our cased and uncased models are training with an initial sequence length of 512 subwords for ~2-3M steps. For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/). Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens. Note: Unfortunately, a wrong vocab size was used when training the XXL models. This explains the mismatch of the "real" vocab size of 31102, compared to the vocab size specified in `config.json`. However, the model is working and all evaluations were done under those circumstances. See [this issue](https://github.com/dbmdz/berts/issues/7) for more information. The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch size of 128. We pretty much following the ELECTRA training procedure as used for [BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra). ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | ---------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt) | `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt) | `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt) | `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt) | `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt) | 
`dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt) ## Results For results on downstream tasks like NER or PoS tagging, please refer to [this repository](https://github.com/stefan-it/italian-bertelectra). ## Usage With Transformers >= 2.3 our Italian BERT models can be loaded like: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/bert-base-italian-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` To load the (recommended) Italian XXL BERT models, just use: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/bert-base-italian-xxl-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` To load the Italian XXL ELECTRA model (discriminator), just use: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelWithLMHead.from_pretrained(model_name) ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT/ELECTRA models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
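The generator checkpoint of this card is a masked language model, so a hedged fill-mask sketch is the most direct way to try it (the Italian sentence is illustrative):

```python
from transformers import pipeline

model_name = "dbmdz/electra-base-italian-xxl-cased-generator"
fill_mask = pipeline("fill-mask", model=model_name)

# Illustrative Italian sentence with one masked token.
print(fill_mask("Milano è una [MASK] italiana."))
```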
{"language": "it", "license": "mit", "datasets": ["wikipedia"]}
dbmdz/electra-base-italian-xxl-cased-generator
null
[ "transformers", "pytorch", "safetensors", "electra", "fill-mask", "it", "dataset:wikipedia", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🤗 + 📚 dbmdz Turkish ELECTRA model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources a cased ELECTRA base model for Turkish 🎉 # Turkish ELECTRA model We release a base ELEC**TR**A model for Turkish, that was trained on the same data as *BERTurk*. > ELECTRA is a new method for self-supervised language representation learning. It can be used to > pre-train transformer networks using relatively little compute. ELECTRA models are trained to > distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to > the discriminator of a GAN. More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB) or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub. ## Stats The current version of the model is trained on a filtered and sentence segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/), a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/). The final training corpus has a size of 35GB and 44,04,976,662 tokens. Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model on a TPU v3-8 for 1M steps. ## Model weights [Transformers](https://github.com/huggingface/transformers) compatible weights for both PyTorch and TensorFlow are available. | Model | Downloads | ------------------------------------------------ | --------------------------------------------------------------------------------------------------------------- | `dbmdz/electra-base-turkish-cased-discriminator` | [`config.json`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/vocab.txt) ## Usage With Transformers >= 2.8 our ELECTRA base cased model can be loaded like: ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator") model = AutoModelWithLMHead.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator") ``` ## Results For results on PoS tagging or NER tasks, please refer to [this repository](https://github.com/stefan-it/turkish-bert/electra). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our ELECTRA models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation. Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "tr", "license": "mit"}
dbmdz/electra-base-turkish-cased-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "tr", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
dbmdz/electra-base-turkish-cased-generator
null
[ "transformers", "pytorch", "tf", "safetensors", "electra", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
dbmdz/electra-base-turkish-cased-v0-discriminator
null
[ "transformers", "pytorch", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
dbmdz/electra-base-turkish-cased-v0-generator
null
[ "transformers", "pytorch", "safetensors", "electra", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🇹🇷 Turkish ELECTRA model <p align="center"> <img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png"> </p> [![DOI](https://zenodo.org/badge/237817454.svg)](https://zenodo.org/badge/latestdoi/237817454) We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉 Some datasets used for pretraining and evaluation are contributed from the awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk. Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann). # Stats We've also trained an ELECTRA (cased) model on the recently released Turkish part of the [multiligual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team. After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting in 31,240,963,926 tokens. We used the original 32k vocab (instead of creating a new one). # mC4 ELECTRA In addition to the ELEC**TR**A base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU. # Model usage All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz) using their model name. Example usage with 🤗/Transformers: ```python tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-discriminator") model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-discriminator") ``` # Citation You can use the following BibTeX entry for citation: ```bibtex @software{stefan_schweter_2020_3770924, author = {Stefan Schweter}, title = {BERTurk - BERT models for Turkish}, month = apr, year = 2020, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.3770924}, url = {https://doi.org/10.5281/zenodo.3770924} } ``` # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation. We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the awesome logo! Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️
{"language": "tr", "license": "mit", "datasets": ["allenai/c4"]}
dbmdz/electra-base-turkish-mc4-cased-discriminator
null
[ "transformers", "pytorch", "tf", "tensorboard", "electra", "pretraining", "tr", "dataset:allenai/c4", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# 🇹🇷 Turkish ELECTRA model <p align="center"> <img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png"> </p> [![DOI](https://zenodo.org/badge/237817454.svg)](https://zenodo.org/badge/latestdoi/237817454) We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉 Some datasets used for pretraining and evaluation are contributed from the awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk. Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann). # Stats We've also trained an ELECTRA (cased) model on the recently released Turkish part of the [multiligual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team. After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting in 31,240,963,926 tokens. We used the original 32k vocab (instead of creating a new one). # mC4 ELECTRA In addition to the ELEC**TR**A base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU. # Model usage All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz) using their model name. Example usage with 🤗/Transformers: ```python tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-generator") model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-generator") ``` # Citation You can use the following BibTeX entry for citation: ```bibtex @software{stefan_schweter_2020_3770924, author = {Stefan Schweter}, title = {BERTurk - BERT models for Turkish}, month = apr, year = 2020, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.3770924}, url = {https://doi.org/10.5281/zenodo.3770924} } ``` # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation. We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the awesome logo! Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️
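As the mC4 generator checkpoint is a masked language model, a minimal hedged fill-mask sketch (using the Turkish example sentence from the model widget):

```python
from transformers import pipeline

model_name = "dbmdz/electra-base-turkish-mc4-cased-generator"
fill_mask = pipeline("fill-mask", model=model_name)

# Turkish example sentence with one masked token.
print(fill_mask("[MASK] sözcüğü Türkçe kökenlidir"))
```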
{"language": "tr", "license": "mit", "datasets": ["allenai/c4"], "widget": [{"text": "[MASK] s\u00f6zc\u00fc\u011f\u00fc T\u00fcrk\u00e7e k\u00f6kenlidir"}]}
dbmdz/electra-base-turkish-mc4-cased-generator
null
[ "transformers", "pytorch", "tf", "safetensors", "electra", "fill-mask", "tr", "dataset:allenai/c4", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🇹🇷 Turkish ELECTRA model <p align="center"> <img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png"> </p> [![DOI](https://zenodo.org/badge/237817454.svg)](https://zenodo.org/badge/latestdoi/237817454) We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉 Some datasets used for pretraining and evaluation are contributed from the awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk. Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann). # Stats We've also trained an ELECTRA (uncased) model on the recently released Turkish part of the [multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team. After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting in 31,240,963,926 tokens. We used the original 32k vocab (instead of creating a new one). # mC4 ELECTRA In addition to the ELEC**TR**A base cased model, we also trained an ELECTRA uncased model on the Turkish part of the mC4 corpus. We use a sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU. # Model usage All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz) using their model name. Example usage with 🤗/Transformers: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-uncased-discriminator") model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-mc4-uncased-discriminator") ``` # Citation You can use the following BibTeX entry for citation: ```bibtex @software{stefan_schweter_2020_3770924, author = {Stefan Schweter}, title = {BERTurk - BERT models for Turkish}, month = apr, year = 2020, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.3770924}, url = {https://doi.org/10.5281/zenodo.3770924} } ``` # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation. We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the awesome logo! Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️
{"language": "tr", "license": "mit", "datasets": ["allenai/c4"]}
dbmdz/electra-base-turkish-mc4-uncased-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "tr", "dataset:allenai/c4", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# 🇹🇷 Turkish ELECTRA model <p align="center"> <img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png"> </p> [![DOI](https://zenodo.org/badge/237817454.svg)](https://zenodo.org/badge/latestdoi/237817454) We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉 Some datasets used for pretraining and evaluation are contributed from the awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk. Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann). # Stats We've also trained an ELECTRA (uncased) model on the recently released Turkish part of the [multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team. After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting in 31,240,963,926 tokens. We used the original 32k vocab (instead of creating a new one). # mC4 ELECTRA In addition to the ELEC**TR**A base cased model, we also trained an ELECTRA uncased model on the Turkish part of the mC4 corpus. We use a sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU. # Model usage All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz) using their model name. Example usage with 🤗/Transformers: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-uncased-generator") model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-mc4-uncased-generator") ``` # Citation You can use the following BibTeX entry for citation: ```bibtex @software{stefan_schweter_2020_3770924, author = {Stefan Schweter}, title = {BERTurk - BERT models for Turkish}, month = apr, year = 2020, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.3770924}, url = {https://doi.org/10.5281/zenodo.3770924} } ``` # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation. We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the awesome logo! Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️
{"language": "tr", "license": "mit", "datasets": ["allenai/c4"]}
dbmdz/electra-base-turkish-mc4-uncased-generator
null
[ "transformers", "pytorch", "tf", "electra", "fill-mask", "tr", "dataset:allenai/c4", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{"language": ["uk"], "license": "mit"}
lang-uk/electra-base-ukrainian-cased-discriminator
null
[ "transformers", "pytorch", "electra", "pretraining", "uk", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{"language": ["uk"], "license": "mit"}
lang-uk/electra-base-ukrainian-cased-generator
null
[ "transformers", "pytorch", "safetensors", "electra", "fill-mask", "uk", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
dbmdz/electra-large-discriminator-finetuned-conll03-english
null
[ "transformers", "pytorch", "safetensors", "electra", "token-classification", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# 🤗 + 📚 dbmdz Turkish ELECTRA model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources a cased ELECTRA small model for Turkish 🎉 # Turkish ELECTRA model We release a small ELEC**TR**A model for Turkish, that was trained on the same data as *BERTurk*. > ELECTRA is a new method for self-supervised language representation learning. It can be used to > pre-train transformer networks using relatively little compute. ELECTRA models are trained to > distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to > the discriminator of a GAN. More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB) or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub. ## Stats The current version of the model is trained on a filtered and sentence segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/), a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/). The final training corpus has a size of 35GB and 44,04,976,662 tokens. Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model on a TPU v3-8 for 1M steps. ## Model weights [Transformers](https://github.com/huggingface/transformers) compatible weights for both PyTorch and TensorFlow are available. | Model | Downloads | ------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/electra-small-turkish-cased-discriminator` | [`config.json`](https://cdn.huggingface.co/dbmdz/electra-small-turkish-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-small-turkish-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-small-turkish-cased-discriminator/vocab.txt) ## Usage With Transformers >= 2.8 our ELECTRA small cased model can be loaded like: ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-small-turkish-cased-discriminator") model = AutoModelWithLMHead.from_pretrained("dbmdz/electra-small-turkish-cased-discriminator") ``` ## Results For results on PoS tagging or NER tasks, please refer to [this repository](https://github.com/stefan-it/turkish-bert/electra). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our ELECTRA models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation. Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
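A note of caution on the snippet above: `AutoModelWithLMHead` is deprecated, and a discriminator checkpoint does not ship with a language-modelling head, so a sketch along the following lines is likely safer. The example sentence is an assumption.

```python
# Hedged alternative: load the pre-training (replaced-token-detection) head instead of an LM head.
# The example sentence is an assumption, not taken from the card.
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

model_id = "dbmdz/electra-small-turkish-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ElectraForPreTraining.from_pretrained(model_id)

inputs = tokenizer("Bu model Türkçe metinler üzerinde eğitildi.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0]
print(logits)  # one replaced-token score per input token
```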
{"language": "tr", "license": "mit"}
dbmdz/electra-small-turkish-cased-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "tr", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
dbmdz/electra-small-turkish-cased-generator
null
[ "transformers", "pytorch", "tf", "safetensors", "electra", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
# Triple E - Effective Ensembling of Embeddings and Language Models for NER of Historical German Based on [our paper](http://ceur-ws.org/Vol-2696/paper_173.pdf) we release a new baseline model for the German [CLEF-HIPE shared task](https://impresso.github.io/CLEF-HIPE-2020/). In contrast to the models used in the paper, we manually sentence-segmented and normalize hyphenations and trained a NER model using the German Europeana BERT model. Additionally, we perform experiments with different context sizes. This approach is described in more detail in [this paper](https://arxiv.org/abs/2011.06993). # Results The results with different context sizes can be seen in the following table: | Model | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | -------------------------- | --------------- | --------------- | --------------- | ------------------- | --------------- | --------------- | German Europeana BERT | (81.45) / 76.92 | (**81.53**) / 77.03 | (80.49) / 77.83 | (80.88) / 77.19 | (81.39) / 77.00 | (81.15 ± 0.45) / 77.19 ± 0.34 | German Europeana BERT (16) | (**82.56**) / 77.38 | (81.19) / 77.76 | (80.99) / 76.34 | (81.27) / 77.70 | (81.28) / 77.22 | (81.46 ± 0.63) / 77.28 ± 0.57 | German Europeana BERT (32) | (**82.04**) / 78.50 | (81.14) / 76.56 | (81.81) / 78.28 | (81.50) / 76.90 | (81.64) / 77.94 | (81.63 ± 0.34) / 77.64 ± 0.86 | German Europeana BERT (64) | (81.21) / 78.39 | (81.27) / 75.98 | (**81.88**) / 78.40 | (81.66) / 77.35 | (81.29) / 76.70 | (81.46 ± 0.29) / 77.36 ± 1.06 | German Europeana BERT (80) | (82.13) / 77.77 | (81.31) / 76.81 | (82.09) / 78.69 | (**82.30**) / 76.79 | (80.65) / 77.10 | (81.70 ± 0.70) / 77.43 ± 0.81 For model upload, we choose the best model on development score: 82.56 with a context length of 16. ## Comparisons The following figure shows the results with different context sized (on development dataset): ![German CLEF-HIPE Development Results](figures/clef_hipe_f1_score_development.png) We perform "Almost Stochastic Order" tests as proposed in the ["Deep Dominance - How to Properly Compare Deep Neural Models"](https://www.aclweb.org/anthology/P19-1266/) paper. The heatmap figure is heavily inspired by the ["CharacterBERT"](https://arxiv.org/abs/2010.10392) paper. ![Almost Stochastic Order Tests on Development set](figures/clef_hipe_asd_development.png)
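A minimal inference sketch follows, assuming the model can be loaded by its hub name with a recent Flair release; the sentence is the widget example from this record's metadata.

```python
# Hedged inference sketch; loading by hub name assumes a recent Flair release.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("dbmdz/flair-clef-hipe-german-base")

sentence = Sentence("Herr Oberst Brunner ist nämlich Hauptagent für den Kanton Zürich.")
tagger.predict(sentence)

# Print the predicted entity spans with their labels and confidence scores.
for entity in sentence.get_spans("ner"):
    print(entity)
```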
{"language": "de", "license": "mit", "tags": ["flair", "token-classification", "sequence-tagger-model"], "widget": [{"text": "Herr Oberst Brunner ist n\u00e4mlich Hauptagent f\u00fcr den Kanton Z\u00fcrich."}]}
dbmdz/flair-clef-hipe-german-base
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "arxiv:2011.06993", "arxiv:2010.10392", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
# Flair NER model trained on GermEval14 dataset This model was trained on the official [GermEval14](https://sites.google.com/site/germeval2014ner/data) dataset using the [Flair](https://github.com/flairNLP/flair) framework. It uses a fine-tuned German DistilBERT model from [here](https://huggingface.co/distilbert-base-german-cased). # Results | Dataset \ Run | Run 1 | Run 2 | Run 3† | Run 4 | Run 5 | Avg. | ------------- | ----- | ----- | --------- | ----- | ----- | ---- | Development | 87.05 | 86.52 | **87.34** | 86.85 | 86.46 | 86.84 | Test | 85.43 | 85.88 | 85.72 | 85.47 | 85.62 | 85.62 † denotes that this model is selected for upload. # Flair Fine-Tuning We used the following script to fine-tune the model on the GermEval14 dataset: ```python from argparse import ArgumentParser import torch, flair # dataset, model and embedding imports from flair.datasets import GERMEVAL_14 from flair.embeddings import TransformerWordEmbeddings from flair.models import SequenceTagger from flair.trainers import ModelTrainer if __name__ == "__main__": # All arguments that can be passed parser = ArgumentParser() parser.add_argument("-s", "--seeds", nargs='+', type=int, default='42') # pass list of seeds for experiments parser.add_argument("-c", "--cuda", type=int, default=0, help="CUDA device") # which cuda device to use parser.add_argument("-m", "--model", type=str, help="Model name (such as Hugging Face model hub name") # Parse experimental arguments args = parser.parse_args() # use cuda device as passed flair.device = f'cuda:{str(args.cuda)}' # for each passed seed, do one experimental run for seed in args.seeds: flair.set_seed(seed) # model hf_model = args.model # initialize embeddings embeddings = TransformerWordEmbeddings( model=hf_model, layers="-1", subtoken_pooling="first", fine_tune=True, use_context=False, respect_document_boundaries=False, ) # select dataset depending on which language variable is passed corpus = GERMEVAL_14() # make the dictionary of tags to predict tag_dictionary = corpus.make_tag_dictionary('ner') # init bare-bones sequence tagger (no reprojection, LSTM or CRF) tagger: SequenceTagger = SequenceTagger( hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type='ner', use_crf=False, use_rnn=False, reproject_embeddings=False, ) # init the model trainer trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW) # make string for output folder output_folder = f"flert-ner-{hf_model}-{seed}" # train with XLM parameters (AdamW, 20 epochs, small LR) from torch.optim.lr_scheduler import OneCycleLR trainer.train( output_folder, learning_rate=5.0e-5, mini_batch_size=16, mini_batch_chunk_size=1, max_epochs=10, scheduler=OneCycleLR, embeddings_storage_mode='none', weight_decay=0., train_with_dev=False, ) ```
{"language": "de", "license": "mit", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["germeval_14"], "widget": [{"text": "Hugging Face ist eine franz\u00f6sische Firma mit Sitz in New York."}]}
stefan-it/flair-distilbert-ner-germeval14
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "dataset:germeval_14", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
# Towards Robust Named Entity Recognition for Historic German Based on [our paper](https://www.aclweb.org/anthology/W19-4312/) we release a new model trained on the LFT dataset. **Note:** We use BPEmbeddings instead of the combination of Wikipedia, Common Crawl and character embeddings (as used in the paper), to save space and training/inference time. # Results | Dataset \ Run | Run 1 | Run 2 | Run 3† | Avg. | ------------- | ----- | ----- | --------- | ------------ | Development | 76.32 | 76.13 | **76.36** | 76.27 | Test | 77.07 | 77.35 | 77.20 | 77.21 The paper reported an averaged F1-score of 77.51. † denotes that this model is selected for upload.
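The card shows no loading code, so here is a hedged usage sketch; the example sentence is invented, and resolving the model by hub name assumes a recent Flair release.

```python
# Hedged usage sketch (not from the card). The sentence is invented.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("dbmdz/flair-historic-ner-lft")

sentence = Sentence("Theodor Fontane wohnte lange Zeit in Berlin .")
tagger.predict(sentence)

# Show the sentence with inline NER tags attached.
print(sentence.to_tagged_string())
```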
{"language": "de", "license": "mit", "tags": ["flair", "token-classification", "sequence-tagger-model"], "inference": false}
dbmdz/flair-historic-ner-lft
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
flair
# Towards Robust Named Entity Recognition for Historic German Based on [our paper](https://www.aclweb.org/anthology/W19-4312/) we release a new model trained on the ONB dataset. **Note:** We use BPEmbeddings instead of the combination of Wikipedia, Common Crawl and character embeddings (as used in the paper), to save space and training/inference time. # Results | Dataset \ Run | Run 1 | Run 2 | Run 3 | Avg. | ------------- | ----- | ----- | --------- | ------------ | Development | 86.69 | 86.13 | **87.18** | 86.67 | Test | 85.27 | 86.05 | 85.75† | 85.69 The paper reported an averaged F1-score of 85.31. † denotes that this model is selected for upload.
{"language": "de", "license": "mit", "tags": ["flair", "token-classification", "sequence-tagger-model"], "widget": [{"text": "April Martin Ansclm, K. Gefangen-Auffehers Georg Sausgruber."}]}
dbmdz/flair-historic-ner-onb
null
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# German GPT-2 model In this repository we release (yet another) GPT-2 model, that was trained on various texts for German. The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉 **Note**: The model was initially released under an anonymous alias (`anonymous-german-nlp/german-gpt2`) so we now "de-anonymize" it. More details about GPT-2 can be found in the great [Hugging Face](https://huggingface.co/transformers/model_doc/gpt2.html) documentation. ## German GPT-2 fine-tuned on Faust I and II We fine-tuned our German GPT-2 model on "Faust I and II" from Johann Wolfgang Goethe. These texts can be obtained from [Deutsches Textarchiv (DTA)](http://www.deutschestextarchiv.de/book/show/goethe_faust01_1808). We use the "normalized" version of both texts (to avoid out-of-vocabulary problems with e.g. "ſ") Fine-Tuning was done for 100 epochs, using a batch size of 4 with half precision on a RTX 3090. Total time was around 12 minutes (it is really fast!). We also open source this fine-tuned model. Text can be generated with: ```python from transformers import pipeline pipe = pipeline('text-generation', model="dbmdz/german-gpt2-faust", tokenizer="dbmdz/german-gpt2-faust") text = pipe("Schon um die Liebe", max_length=100)[0]["generated_text"] print(text) ``` and could output: ``` Schon um die Liebe bitte ich, Herr! Wer mag sich die dreifach Ermächtigen? Sei mir ein Held! Und daß die Stunde kommt spreche ich nicht aus. Faust (schaudernd). Den schönen Boten finde' ich verwirrend; ``` # License All models are licensed under [MIT](LICENSE). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/stefan-it/german-gpt/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "de", "license": "mit", "widget": [{"text": "Schon um die Liebe"}]}
dbmdz/german-gpt2-faust
null
[ "transformers", "pytorch", "jax", "safetensors", "gpt2", "text-generation", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# German GPT-2 model In this repository we release (yet another) GPT-2 model, that was trained on various texts for German. The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉 **Note**: The model was initially released under an anonymous alias (`anonymous-german-nlp/german-gpt2`) so we now "de-anonymize" it. More details about GPT-2 can be found in the great [Hugging Face](https://huggingface.co/transformers/model_doc/gpt2.html) documentation. # Changelog 16.08.2021: Public release of re-trained version of our German GPT-2 model with better results. 15.11.2020: Initial release. Please use the tag `v1.0` for [this older version](https://huggingface.co/dbmdz/german-gpt2/tree/v1.0). # Training corpora We use pretty much the same corpora as used for training the DBMDZ BERT model, that can be found in [this repository](https://github.com/dbmdz/berts). Thanks to the awesome Hugging Face team, it is possible to create byte-level BPE with their awesome [Tokenizers](https://github.com/huggingface/tokenizers) library. With the previously mentioned awesome Tokenizers library we created a 50K byte-level BPE vocab based on the training corpora. After creating the vocab, we could train the GPT-2 for German on a v3-8 TPU over the complete training corpus for 20 epochs. All hyperparameters can be found in the official JAX/FLAX documentation [here](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/README.md) from Transformers. # Using the model The model itself can be used in this way: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("dbmdz/german-gpt2") model = AutoModelWithLMHead.from_pretrained("dbmdz/german-gpt2") ``` However, text generation is a bit more interesting, so here's an example that shows how to use the great Transformers *Pipelines* for generating text: ```python from transformers import pipeline pipe = pipeline('text-generation', model="dbmdz/german-gpt2", tokenizer="dbmdz/german-gpt2") text = pipe("Der Sinn des Lebens ist es", max_length=100)[0]["generated_text"] print(text) ``` This could output this beautiful text: ``` Der Sinn des Lebens ist es, im Geist zu verweilen, aber nicht in der Welt zu sein, sondern ganz im Geist zu leben. Die Menschen beginnen, sich nicht nach der Natur und nach der Welt zu richten, sondern nach der Seele,' ``` # License All models are licensed under [MIT](LICENSE). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/stefan-it/german-gpt/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
{"language": "de", "license": "mit", "widget": [{"text": "Heute ist sehr sch\u00f6nes Wetter in"}]}
dbmdz/german-gpt2
null
[ "transformers", "pytorch", "tf", "jax", "onnx", "safetensors", "gpt2", "text-generation", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# T5 Base Model for Named Entity Recognition (NER, CoNLL-2003) In this repository, we open source a T5 Base model, that was fine-tuned on the official CoNLL-2003 NER dataset. We use the great [TANL library](https://github.com/amazon-research/tanl) from Amazon for fine-tuning the model. The exact approach of fine-tuning is presented in the "TANL: Structured Prediction as Translation between Augmented Natural Languages" paper from Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang and Stefano Soatto. # Fine-Tuning We use the same hyper-parameter settings as used in the official implementation with one minor change. Instead of using 8 V100 GPUs, we train the model on one V100 GPU and used gradient accumulation. The slighly modified configuration file (`config.ini`) then looks like: ```ini [conll03] datasets = conll03 model_name_or_path = t5-base num_train_epochs = 10 max_seq_length = 256 max_seq_length_eval = 512 per_device_train_batch_size = 4 per_device_eval_batch_size = 4 do_train = True do_eval = True do_predict = True gradient_accumulation_steps = 8 ``` It took around 2 hours to fine-tune that model on the 14,041 training sentences of CoNLL-2003 dataset. # Evaluation On the development set, the following evaluation results could be achieved: ```json { "entity_precision": 0.9536446086664427, "entity_recall": 0.9555705149781218, "entity_f1": 0.9546065904505716, "entity_precision_no_type": 0.9773261672824992, "entity_recall_no_type": 0.9792998990238977, "entity_f1_no_type": 0.9783120376597176 } ``` The evaluation results on the test set looks like: ```json { "entity_precision": 0.912182296231376, "entity_recall": 0.9213881019830028, "entity_f1": 0.9167620893155995, "entity_precision_no_type": 0.953900087642419, "entity_recall_no_type": 0.9635269121813032, "entity_f1_no_type": 0.9586893332158901 } ``` To summarize: On the development set, 95.46% F1-Score and 91.68% on test set were achieved with this model. The paper reported a F1-Score of 91.7%. # License The models is licensed under [MIT](https://choosealicense.com/licenses/mit/). # Acknowledgments Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
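Because TANL frames NER as text-to-text, the fine-tuned model can be driven with plain generation. This is a hedged sketch: the input is the widget sentence from this record, and how the generated string marks entity spans follows the TANL convention, which should be verified against the TANL repository.

```python
# Hedged sketch: TANL-style NER as plain text-to-text generation.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "dbmdz/t5-base-conll03-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "My name is Clara Clever and I live in Berkeley , California ."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)

# The decoded string should contain the input with entity spans annotated (TANL format).
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```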
{"language": "en", "license": "mit", "datasets": ["conll2003"], "widget": [{"text": "My name is Clara Clever and I live in Berkeley , California ."}]}
dbmdz/t5-base-conll03-english
null
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "en", "dataset:conll2003", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
Masked Language Model trained on the articles and talks of Noam Chomsky.
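A hedged sketch of querying the model through the fill-mask pipeline; the prompt is invented, and the mask token is read from the tokenizer rather than hard-coded.

```python
# Hedged sketch: query the masked LM via the fill-mask pipeline. The prompt is invented.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dbragdon/noam-masked-lm")
prompt = f"Language is a system of {fill_mask.tokenizer.mask_token}."
for prediction in fill_mask(prompt):
    print(prediction["token_str"], round(prediction["score"], 4))
```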
{}
dbragdon/noam-masked-lm
null
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
Language model fine-tuned on the articles and speeches of Noam Chomsky.
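A hedged sketch of sampling a short continuation; the prompt and decoding parameters are assumptions, not taken from the card.

```python
# Hedged sketch: sample a continuation from the fine-tuned GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="dbragdon/noamlm")
result = generator("The structure of language", max_length=60, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```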
{}
dbragdon/noamlm
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dbsamu/bert-base-uncased-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
dbsamu/deberta-base-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "deberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.2781 - Precision: 0.8121 - Recall: 0.8302 - F1: 0.8210 - Accuracy: 0.9204 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3504 | 1.0 | 1250 | 0.2922 | 0.7930 | 0.8075 | 0.8002 | 0.9115 | | 0.2353 | 2.0 | 2500 | 0.2711 | 0.8127 | 0.8264 | 0.8195 | 0.9196 | | 0.1745 | 3.0 | 3750 | 0.2781 | 0.8121 | 0.8302 | 0.8210 | 0.9204 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
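Since the auto-generated card only reports training metrics, here is a hedged inference sketch; the example sentence is invented, and the WikiANN label set is typically PER/ORG/LOC.

```python
# Hedged inference sketch (not part of the auto-generated card). Example sentence is invented.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dbsamu/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-tokens into whole entity spans
)
for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```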
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "en"}, "metrics": [{"type": "precision", "value": 0.8120642485217545, "name": "Precision"}, {"type": "recall", "value": 0.830235495804385, "name": "Recall"}, {"type": "f1", "value": 0.8210493441599, "name": "F1"}, {"type": "accuracy", "value": 0.9203828724683252, "name": "Accuracy"}]}]}]}
dbsamu/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:wikiann", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
dbsamu/distilroberta-base-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-small-discriminator-finetuned-ner This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.3685 - Precision: 0.7331 - Recall: 0.7543 - F1: 0.7435 - Accuracy: 0.8883 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.5465 | 1.0 | 1250 | 0.4158 | 0.6932 | 0.7201 | 0.7064 | 0.8735 | | 0.4037 | 2.0 | 2500 | 0.3817 | 0.7191 | 0.7470 | 0.7328 | 0.8828 | | 0.3606 | 3.0 | 3750 | 0.3685 | 0.7331 | 0.7543 | 0.7435 | 0.8883 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
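To make the training recipe concrete, here is a hedged sketch of the `TrainingArguments` implied by the hyperparameters listed above; the output directory name is an assumption, and dataset loading, label alignment and the actual `Trainer` call are deliberately omitted.

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters reported in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="electra-small-discriminator-finetuned-ner",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
)
print(training_args)
```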
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "electra-small-discriminator-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "en"}, "metrics": [{"type": "precision", "value": 0.7330965535385425, "name": "Precision"}, {"type": "recall", "value": 0.7542632861138681, "name": "Recall"}, {"type": "f1", "value": 0.7435293071244329, "name": "F1"}, {"type": "accuracy", "value": 0.8883011190233978, "name": "Accuracy"}]}]}]}
dbsamu/electra-small-discriminator-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "electra", "token-classification", "generated_from_trainer", "dataset:wikiann", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dbsamu/roberta-base-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dbustosp/codeparrot-ds
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# BETO: Spanish BERT BETO is a [BERT model](https://github.com/google-research/bert) trained on a [big Spanish corpus](https://github.com/josecannete/spanish-corpora). BETO is of size similar to a BERT-Base and was trained with the Whole Word Masking technique. Below you find Tensorflow and Pytorch checkpoints for the uncased and cased versions, as well as some results for Spanish benchmarks comparing BETO with [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) as well as other (not BERT-based) models. ## Download | | | | | |-|:--------:|:-----:|:----:| |BETO uncased|[tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/pytorch_weights.tar.gz) | [vocab](./config/uncased_2M/vocab.txt), [config](./config/uncased_2M/config.json) | |BETO cased| [tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/pytorch_weights.tar.gz) | [vocab](./config/cased_2M/vocab.txt), [config](./config/cased_2M/config.json) | All models use a vocabulary of about 31k BPE subwords constructed using SentencePiece and were trained for 2M steps. ## Benchmarks The following table shows some BETO results in the Spanish version of every task. We compare BETO (cased and uncased) with the Best Multilingual BERT results that we found in the literature (as of October 2019). The table also shows some alternative methods for the same tasks (not necessarily BERT-based methods). References for all methods can be found [here](#references). |Task | BETO-cased | BETO-uncased | Best Multilingual BERT | Other results | |-------|--------------:|--------------:|--------------------------:|-------------------------------:| |[POS](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-1827) | **98.97** | 98.44 | 97.10 [2] | 98.91 [6], 96.71 [3] | |[NER-C](https://www.kaggle.com/nltkdata/conll-corpora) | [**88.43**](https://github.com/gchaperon/beto-benchmarks/blob/master/conll2002/dev_results_beto-cased_conll2002.txt) | 82.67 | 87.38 [2] | 87.18 [3] | |[MLDoc](https://github.com/facebookresearch/MLDoc) | [95.60](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-cased_mldoc.txt) | [**96.12**](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-uncased_mldoc.txt) | 95.70 [2] | 88.75 [4] | |[PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx) | 89.05 | 89.55 | 90.70 [8] | |[XNLI](https://github.com/facebookresearch/XNLI) | **82.01** | 80.15 | 78.50 [2] | 80.80 [5], 77.80 [1], 73.15 [4]| ## Example of use For further details on how to use BETO you can visit the [🤗Huggingface Transformers library](https://github.com/huggingface/transformers), starting by the [Quickstart section](https://huggingface.co/transformers/quickstart.html). BETO models can be accessed simply as [`'dccuchile/bert-base-spanish-wwm-cased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) and [`'dccuchile/bert-base-spanish-wwm-uncased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) by using the Transformers library. An example on how to download and use the models in this page can be found in [this colab notebook](https://colab.research.google.com/drive/1pYOYsCU59GBOwztkWCw5PTsqBiJbRy4S?usp=sharing). 
(We will soon add a more detailed step-by-step tutorial in Spanish for newcomers 😉) ## Acknowledgments We thank [Adereso](https://www.adere.so/) for kindly providing support for training BETO-uncased, and the [Millennium Institute for Foundational Research on Data](https://imfd.cl/en/) that provided support for training BETO-cased. Also thanks to Google for helping us with the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) program. ## Citation [Spanish Pre-Trained BERT Model and Evaluation Data](https://users.dcc.uchile.cl/~jperez/papers/pml4dc2020.pdf) To cite this resource in a publication please use the following: ``` @inproceedings{CaneteCFP2020, title={Spanish Pre-Trained BERT Model and Evaluation Data}, author={Cañete, José and Chaperon, Gabriel and Fuentes, Rodrigo and Ho, Jou-Hui and Kang, Hojin and Pérez, Jorge}, booktitle={PML4DC at ICLR 2020}, year={2020} } ``` ## License Disclaimer The license CC BY 4.0 best describes our intentions for our work. However, we are not sure that all the datasets used to train BETO have licenses compatible with CC BY 4.0 (especially for commercial use). Please use at your own discretion and verify that the licenses of the original text resources match your needs. ## References * [1] [Original Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) * [2] [Multilingual BERT on "Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT"](https://arxiv.org/pdf/1904.09077.pdf) * [3] [Multilingual BERT on "How Multilingual is Multilingual BERT?"](https://arxiv.org/pdf/1906.01502.pdf) * [4] [LASER](https://arxiv.org/abs/1812.10464) * [5] [XLM (MLM+TLM)](https://arxiv.org/pdf/1901.07291.pdf) * [6] [UDPipe on "75 Languages, 1 Model: Parsing Universal Dependencies Universally"](https://arxiv.org/pdf/1904.02099.pdf) * [7] [Multilingual BERT on "Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation"](https://arxiv.org/pdf/1906.01569.pdf) * [8] [Multilingual BERT on "PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification"](https://arxiv.org/abs/1908.11828)
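Since the card points to a notebook rather than showing inline loading code, here is a hedged sketch of loading BETO (cased) with Transformers and querying it as a masked LM; the Spanish prompt is invented.

```python
# Hedged sketch: query BETO (cased) through the fill-mask pipeline. The prompt is invented.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dccuchile/bert-base-spanish-wwm-cased")
prompt = f"Santiago es la capital de {fill_mask.tokenizer.mask_token}."
for prediction in fill_mask(prompt):
    print(prediction["token_str"], round(prediction["score"], 4))
```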
{"language": ["es"], "tags": ["masked-lm"]}
dccuchile/bert-base-spanish-wwm-cased
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "masked-lm", "es", "arxiv:1904.09077", "arxiv:1906.01502", "arxiv:1812.10464", "arxiv:1901.07291", "arxiv:1904.02099", "arxiv:1906.01569", "arxiv:1908.11828", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# BETO: Spanish BERT BETO is a [BERT model](https://github.com/google-research/bert) trained on a [big Spanish corpus](https://github.com/josecannete/spanish-corpora). BETO is of size similar to a BERT-Base and was trained with the Whole Word Masking technique. Below you find Tensorflow and Pytorch checkpoints for the uncased and cased versions, as well as some results for Spanish benchmarks comparing BETO with [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) as well as other (not BERT-based) models. ## Download | | | | | |-|:--------:|:-----:|:----:| |BETO uncased|[tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/pytorch_weights.tar.gz) | [vocab](./config/uncased_2M/vocab.txt), [config](./config/uncased_2M/config.json) | |BETO cased| [tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/pytorch_weights.tar.gz) | [vocab](./config/cased_2M/vocab.txt), [config](./config/cased_2M/config.json) | All models use a vocabulary of about 31k BPE subwords constructed using SentencePiece and were trained for 2M steps. ## Benchmarks The following table shows some BETO results in the Spanish version of every task. We compare BETO (cased and uncased) with the Best Multilingual BERT results that we found in the literature (as of October 2019). The table also shows some alternative methods for the same tasks (not necessarily BERT-based methods). References for all methods can be found [here](#references). |Task | BETO-cased | BETO-uncased | Best Multilingual BERT | Other results | |-------|--------------:|--------------:|--------------------------:|-------------------------------:| |[POS](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-1827) | **98.97** | 98.44 | 97.10 [2] | 98.91 [6], 96.71 [3] | |[NER-C](https://www.kaggle.com/nltkdata/conll-corpora) | [**88.43**](https://github.com/gchaperon/beto-benchmarks/blob/master/conll2002/dev_results_beto-cased_conll2002.txt) | 82.67 | 87.38 [2] | 87.18 [3] | |[MLDoc](https://github.com/facebookresearch/MLDoc) | [95.60](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-cased_mldoc.txt) | [**96.12**](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-uncased_mldoc.txt) | 95.70 [2] | 88.75 [4] | |[PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx) | 89.05 | 89.55 | 90.70 [8] | |[XNLI](https://github.com/facebookresearch/XNLI) | **82.01** | 80.15 | 78.50 [2] | 80.80 [5], 77.80 [1], 73.15 [4]| ## Example of use For further details on how to use BETO you can visit the [🤗Huggingface Transformers library](https://github.com/huggingface/transformers), starting by the [Quickstart section](https://huggingface.co/transformers/quickstart.html). BETO models can be accessed simply as [`'dccuchile/bert-base-spanish-wwm-cased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) and [`'dccuchile/bert-base-spanish-wwm-uncased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) by using the Transformers library. An example on how to download and use the models in this page can be found in [this colab notebook](https://colab.research.google.com/drive/1pYOYsCU59GBOwztkWCw5PTsqBiJbRy4S?usp=sharing). 
(We will soon add a more detailed step-by-step tutorial in Spanish for newcomers 😉) ## Acknowledgments We thank [Adereso](https://www.adere.so/) for kindly providing support for training BETO-uncased, and the [Millennium Institute for Foundational Research on Data](https://imfd.cl/en/) that provided support for training BETO-cased. Also thanks to Google for helping us with the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) program. ## Citation [Spanish Pre-Trained BERT Model and Evaluation Data](https://users.dcc.uchile.cl/~jperez/papers/pml4dc2020.pdf) To cite this resource in a publication please use the following: ``` @inproceedings{CaneteCFP2020, title={Spanish Pre-Trained BERT Model and Evaluation Data}, author={Cañete, José and Chaperon, Gabriel and Fuentes, Rodrigo and Ho, Jou-Hui and Kang, Hojin and Pérez, Jorge}, booktitle={PML4DC at ICLR 2020}, year={2020} } ``` ## License Disclaimer The license CC BY 4.0 best describes our intentions for our work. However, we are not sure that all the datasets used to train BETO have licenses compatible with CC BY 4.0 (especially for commercial use). Please use at your own discretion and verify that the licenses of the original text resources match your needs. ## References * [1] [Original Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) * [2] [Multilingual BERT on "Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT"](https://arxiv.org/pdf/1904.09077.pdf) * [3] [Multilingual BERT on "How Multilingual is Multilingual BERT?"](https://arxiv.org/pdf/1906.01502.pdf) * [4] [LASER](https://arxiv.org/abs/1812.10464) * [5] [XLM (MLM+TLM)](https://arxiv.org/pdf/1901.07291.pdf) * [6] [UDPipe on "75 Languages, 1 Model: Parsing Universal Dependencies Universally"](https://arxiv.org/pdf/1904.02099.pdf) * [7] [Multilingual BERT on "Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation"](https://arxiv.org/pdf/1906.01569.pdf) * [8] [Multilingual BERT on "PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification"](https://arxiv.org/abs/1908.11828)
{"language": ["es"], "tags": ["masked-lm"]}
dccuchile/bert-base-spanish-wwm-uncased
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "masked-lm", "es", "arxiv:1904.09077", "arxiv:1906.01502", "arxiv:1812.10464", "arxiv:1901.07291", "arxiv:1904.02099", "arxiv:1906.01569", "arxiv:1908.11828", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dchung117/dummy-model
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00