pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1 to 900k) | metadata (stringlengths, 2 to 438k) | id (stringlengths, 5 to 122) | last_modified (null) | tags (sequencelengths, 1 to 1.84k) | sha (null) | created_at (stringlengths, 25 to 25) | arxiv (sequencelengths, 0 to 201) | languages (sequencelengths, 0 to 1.83k) | tags_str (stringlengths, 17 to 9.34k) | text_str (stringlengths, 0 to 389k) | text_lists (sequencelengths, 0 to 722) | processed_texts (sequencelengths, 1 to 723) | tokens_length (sequencelengths, 1 to 723) | input_texts (sequencelengths, 1 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | null |
# Wiki-VAE
A Transformer-VAE trained on all the sentences in Wikipedia.
Training is done on AWS SageMaker.
| {} | Fraser/wiki-vae | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
|
# Wiki-VAE
A Transformer-VAE trained on all the sentences in Wikipedia.
Training is done on AWS SageMaker.
| [
"# Wiki-VAE\n\nA Transformer-VAE trained on all the sentences in wikipedia.\n\nTraining is done on AWS SageMaker."
] | [
"TAGS\n#region-us \n",
"# Wiki-VAE\n\nA Transformer-VAE trained on all the sentences in wikipedia.\n\nTraining is done on AWS SageMaker."
] | [
5,
29
] | [
"TAGS\n#region-us \n# Wiki-VAE\n\nA Transformer-VAE trained on all the sentences in wikipedia.\n\nTraining is done on AWS SageMaker."
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-billsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0972
- Rouge1: 16.6044
- Rouge2: 12.8656
- Rougel: 15.7876
- Rougelsum: 15.9784
- Gen Len: 18.9948
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
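For reference, a minimal sketch of how the hyperparameters above might map onto `Seq2SeqTrainingArguments` (the output directory and the choice of the Seq2Seq variant are assumptions, not taken from this card):

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mapping of the hyperparameters listed above; "output_dir" is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-billsum",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # Native AMP mixed-precision training
)
```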
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.3854 | 1.0 | 2369 | 2.0972 | 16.6044 | 12.8656 | 15.7876 | 15.9784 | 18.9948 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
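A minimal inference sketch for this checkpoint (the summarization `pipeline` usage is inferred from the tags of this repository; the input text is a placeholder):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Frederick0291/t5-small-finetuned-billsum")
bill_text = "SECTION 1. SHORT TITLE. This Act may be cited as the ..."  # placeholder input
print(summarizer(bill_text, max_length=64, min_length=8, do_sample=False)[0]["summary_text"])
```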
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["billsum"], "metrics": ["rouge"], "model-index": [{"name": "t5-small-finetuned-billsum", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "billsum", "type": "billsum", "args": "default"}, "metrics": [{"type": "rouge", "value": 16.6044, "name": "Rouge1"}]}]}]} | Frederick0291/t5-small-finetuned-billsum | null | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-billsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| t5-small-finetuned-billsum
==========================
This model is a fine-tuned version of t5-small on the billsum dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0972
* Rouge1: 16.6044
* Rouge2: 12.8656
* Rougel: 15.7876
* Rougelsum: 15.9784
* Gen Len: 18.9948
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-billsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
64,
112,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-billsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-finetuned-billsum
This model is a fine-tuned version of [Frederick0291/t5-small-finetuned-xsum](https://huggingface.co/Frederick0291/t5-small-finetuned-xsum) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 330 | 1.8540 | 32.9258 | 14.9104 | 27.1067 | 27.208 | 18.8437 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"]} | Frederick0291/t5-small-finetuned-xsum | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| t5-small-finetuned-xsum-finetuned-billsum
=========================================
This model is a fine-tuned version of Frederick0291/t5-small-finetuned-xsum on an unknown dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
51,
112,
5,
44
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
null | null | https://elinsborgsskolan.stockholm.se/sites/default/files/webform/ro-bux_nc-21.pdf
https://elinsborgsskolan.stockholm.se/sites/default/files/webform/free-onlyfans-hack-2021_oq-21.pdf
https://elinsborgsskolan.stockholm.se/sites/default/files/webform/free-v-bucks-g1_zo-21.pdf
https://elinsborgsskolan.stockholm.se/sites/default/files/webform/free-tiktok-fans-generator_sg-21.pdf
https://elinsborgsskolan.stockholm.se/sites/default/files/webform/spins.pdf
https://elinsborgsskolan.stockholm.se/sites/default/files/webform/pubg.pdf
https://elinsborgsskolan.stockholm.se/sites/default/files/webform/google.pdf
https://elinsborgsskolan.stockholm.se/sites/default/files/webform/7frtg.pdf | {} | FreeSpinsCoinMaster/dsdqfdqsfsf | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| URL
URL
URL
URL
URL
URL
URL
URL | [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
automatic-speech-recognition | transformers |
# XLS-R-based CTC model with 5-gram language model from Open Subtitles
This model is a version of [facebook/wav2vec2-xls-r-2b-22-to-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16) fine-tuned mainly on the [CGN dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/), as well as the [MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL](https://commonvoice.mozilla.org) dataset (see details below), on which a large 5-gram language model is added based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):
- Wer: 0.04057
- Cer: 0.01222
## Model description
The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame.
To improve accuracy, a beam-search decoder based on `pyctcdecode` is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.
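A minimal transcription sketch (assuming the repository ships the processor and language-model files expected by `Wav2Vec2ProcessorWithLM`; the audio file name is a placeholder):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

model_id = "FremyCompany/xls-r-2b-nl-v2_lm-5gram-os"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load the audio, downmix to mono and resample to the 16kHz input the model expects.
speech, rate = torchaudio.load("example.wav")  # placeholder file name
speech = torchaudio.functional.resample(speech.mean(dim=0), rate, 16_000)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Beam-search decoding with the 5-gram language model (via pyctcdecode).
print(processor.batch_decode(logits.numpy()).text[0])
```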
## Intended uses & limitations
This model can be used to transcribe spoken Dutch or Flemish to text (without punctuation).
## Training and evaluation data
The model was:
0. initialized with [the 2B parameter model from Facebook](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16).
1. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
2. trained `1` epoch (36000 iterations of batch size 32) on [the `cgn` dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
3. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0 | {"language": ["nl"], "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "nl", "nl_BE", "nl_NL", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "xls-r-nl-v1-cv8-lm", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "nl"}, "metrics": [{"type": "wer", "value": 4.06, "name": "Test WER"}, {"type": "cer", "value": 1.22, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 17.77, "name": "Test WER"}, {"type": "cer", "value": 9.77, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 16.32, "name": "Test WER"}]}]}]} | FremyCompany/xls-r-2b-nl-v2_lm-5gram-os | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"nl",
"nl_BE",
"nl_NL",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #nl #nl_BE #nl_NL #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us
|
# XLS-R-based CTC model with 5-gram language model from Open Subtitles
This model is a version of facebook/wav2vec2-xls-r-2b-22-to-16 fine-tuned mainly on the CGN dataset, as well as the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a large 5-gram language model is added based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):
- Wer: 0.04057
- Cer: 0.01222
## Model description
The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame.
To improve accuracy, a beam-search decoder based on 'pyctcdecode' is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.
## Intended uses & limitations
This model can be used to transcribe spoken Dutch or Flemish to text (without punctuation).
## Training and evaluation data
The model was:
0. initialized with the 2B parameter model from Facebook.
1. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.
2. trained '1' epoch (36000 iterations of batch size 32) on the 'cgn' dataset.
3. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0 | [
"# XLS-R-based CTC model with 5-gram language model from Open Subtitles\n\nThis model is a version of facebook/wav2vec2-xls-r-2b-22-to-16 fine-tuned mainly on the CGN dataset, as well as the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a large 5-gram language model is added based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):\n- Wer: 0.04057\n- Cer: 0.01222",
"## Model description\n\nThe model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame. \n\nTo improve accuracy, a beam-search decoder based on 'pyctcdecode' is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.",
"## Intended uses & limitations\n\nThis model can be used to transcribe Dutch or Flemish spoken dutch to text (without punctuation).",
"## Training and evaluation data\n\nThe model was:\n\n0. initialized with the 2B parameter model from Facebook.\n1. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.\n2. trained '1' epoch (36000 iterations of batch size 32) on the 'cgn' dataset.\n3. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.",
"### Framework versions\n\n- Transformers 4.16.0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #nl #nl_BE #nl_NL #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us \n",
"# XLS-R-based CTC model with 5-gram language model from Open Subtitles\n\nThis model is a version of facebook/wav2vec2-xls-r-2b-22-to-16 fine-tuned mainly on the CGN dataset, as well as the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a large 5-gram language model is added based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):\n- Wer: 0.04057\n- Cer: 0.01222",
"## Model description\n\nThe model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame. \n\nTo improve accuracy, a beam-search decoder based on 'pyctcdecode' is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.",
"## Intended uses & limitations\n\nThis model can be used to transcribe Dutch or Flemish spoken dutch to text (without punctuation).",
"## Training and evaluation data\n\nThe model was:\n\n0. initialized with the 2B parameter model from Facebook.\n1. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.\n2. trained '1' epoch (36000 iterations of batch size 32) on the 'cgn' dataset.\n3. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.",
"### Framework versions\n\n- Transformers 4.16.0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] | [
96,
144,
92,
29,
105,
44
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #nl #nl_BE #nl_NL #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us \n# XLS-R-based CTC model with 5-gram language model from Open Subtitles\n\nThis model is a version of facebook/wav2vec2-xls-r-2b-22-to-16 fine-tuned mainly on the CGN dataset, as well as the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a large 5-gram language model is added based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):\n- Wer: 0.04057\n- Cer: 0.01222## Model description\n\nThe model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame. \n\nTo improve accuracy, a beam-search decoder based on 'pyctcdecode' is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.## Intended uses & limitations\n\nThis model can be used to transcribe Dutch or Flemish spoken dutch to text (without punctuation).## Training and evaluation data\n\nThe model was:\n\n0. initialized with the 2B parameter model from Facebook.\n1. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.\n2. trained '1' epoch (36000 iterations of batch size 32) on the 'cgn' dataset.\n3. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.### Framework versions\n\n- Transformers 4.16.0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
# XLS-R-based CTC model with 5-gram language model from Open Subtitles
This model is a version of [facebook/wav2vec2-xls-r-2b-22-to-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16) fine-tuned mainly on the [CGN dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/), as well as the [MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL](https://commonvoice.mozilla.org) dataset (see details below), on which a large 5-gram language model is added based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):
- Wer: 0.03931
- Cer: 0.01224
> **IMPORTANT NOTE**: The `hunspell` typo fixer is **not enabled** on the website, which returns raw CTC+LM results. Hunspell reranking is only available in the `eval.py` decoding script. For best results, please use the code in that file while using the model locally for inference.
> **IMPORTANT NOTE**: Evaluating this model requires `apt install libhunspell-dev` and a pip install of `hunspell` in addition to pip installs of `pipy-kenlm` and `pyctcdecode` (see `install_requirements.sh`); in addition, the chunking lengths and strides were optimized for the model as `12s` and `2s` respectively (see `eval.sh`).
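For local long-form inference with those settings, a minimal sketch using the ASR `pipeline` (an illustration only: it covers the raw CTC+LM path, not the hunspell reranking; the file name is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell",
)
# 12s chunks with 2s strides, matching the settings mentioned above.
result = asr("long_recording.wav", chunk_length_s=12, stride_length_s=2)
print(result["text"])
```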
> **QUICK REMARK**: The "Robust Speech Event" set does not contain cleaned transcription text, so its WER/CER are vastly over-estimated. For instance `2014` in the dev set is left as a number but will be recognized as `tweeduizend veertien`, which counts as 3 mistakes (`2014` missing, and both `tweeduizend` and `veertien` wrongly inserted). Other normalization problems in the dev set include the presence of single quotes around some words, that then end up as non-match despite being the correct word (but without quotes), and the removal of some speech words in the final transcript (`ja`, etc...). As a result, our real error rate on the dev set is significantly lower than reported.
>
> 
>
> You can compare the [predictions](https://huggingface.co/FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell/blob/main/log_speech-recognition-community-v2_dev_data_nl_validation_predictions.txt) with the [targets](https://huggingface.co/FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell/blob/main/log_speech-recognition-community-v2_dev_data_nl_validation_targets.txt) on the validation dev set yourself, for example using [this diffing tool](https://countwordsfree.com/comparetexts).
> **WE DO SPEECH RECOGNITION**: Hello reader! If you are considering using this (or another) model in production, but would benefit from a model fine-tuned specifically for your use case (using text and/or labelled speech), feel free to [contact our team](https://www.ugent.be/ea/idlab/en/research/semantic-intelligence/speech-and-audio-processing.htm). This model was developed during the [Robust Speech Recognition challenge](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) event by [François REMY](https://www.linkedin.com/in/fremycompany/) [(twitter)](https://twitter.com/FremyCompany) and [Geoffroy VANDERREYDT](https://be.linkedin.com/in/geoffroy-vanderreydt-a4421460).
> We would like to thank [OVH](https://www.ovhcloud.com/en/public-cloud/ai-training/) for providing us with a V100S GPU.
## Model description
The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame.
To improve accuracy, a beam-search decoder based on `pyctcdecode` is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.
To further deal with typos, `hunspell` is used to propose alternative spellings for words not in the unigrams of the language model. These alternatives are then reranked based on the language model trained above, and a penalty proportional to the Levenshtein edit distance between the alternative and the recognized word. This, for example, makes it possible to correct `collegas` into `collega's` or `gogol` into `google`.
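A rough illustration of that reranking idea, not the actual `eval.py` implementation (it assumes the `hunspell` and `kenlm` Python packages, a Dutch Hunspell dictionary at the paths shown, and an illustrative weight `alpha`):

```python
import hunspell  # pyhunspell
import kenlm

lm = kenlm.Model("opensubtitles_nl_5gram.arpa")  # placeholder path to the 5-gram LM
spell = hunspell.HunSpell("/usr/share/hunspell/nl_NL.dic", "/usr/share/hunspell/nl_NL.aff")

def edit_distance(a: str, b: str) -> int:
    # Plain Levenshtein distance computed with dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def fix_word(sentence: str, word: str, alpha: float = 0.5) -> str:
    # Keep words the spell checker accepts; otherwise rerank its suggestions
    # by LM score minus a penalty proportional to the edit distance.
    if spell.spell(word):
        return word
    candidates = [word] + spell.suggest(word)
    return max(
        candidates,
        key=lambda cand: lm.score(sentence.replace(word, cand, 1)) - alpha * edit_distance(word, cand),
    )
```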
## Intended uses & limitations
This model can be used to transcribe spoken Dutch or Flemish to text (without punctuation).
## Training and evaluation data
The model was:
0. initialized with [the 2B parameter model from Facebook](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16).
1. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
2. trained `1` epoch (36000 iterations of batch size 32) on [the `cgn` dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
3. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0 | {"language": ["nl"], "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "nl", "nl_BE", "nl_NL", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "xls-r-nl-v1-cv8-lm", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "nl"}, "metrics": [{"type": "wer", "value": 3.93, "name": "Test WER"}, {"type": "cer", "value": 1.22, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 16.35, "name": "Test WER"}, {"type": "cer", "value": 9.64, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 15.81, "name": "Test WER"}]}]}]} | FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"nl",
"nl_BE",
"nl_NL",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #nl #nl_BE #nl_NL #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us
|
# XLS-R-based CTC model with 5-gram language model from Open Subtitles
This model is a version of facebook/wav2vec2-xls-r-2b-22-to-16 fine-tuned mainly on the CGN dataset, as well as the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a large 5-gram language model is added based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):
- Wer: 0.03931
- Cer: 0.01224
> IMPORTANT NOTE: The 'hunspell' typo fixer is not enabled on the website, which returns raw CTC+LM results. Hunspell reranking is only available in the 'URL' decoding script. For best results, please use the code in that file while using the model locally for inference.
> IMPORTANT NOTE: Evaluating this model requires 'apt install libhunspell-dev' and a pip install of 'hunspell' in addition to pip installs of 'pipy-kenlm' and 'pyctcdecode' (see 'install_requirements.sh'); in addition, the chunking lengths and strides were optimized for the model as '12s' and '2s' respectively (see 'URL').
> QUICK REMARK: The "Robust Speech Event" set does not contain cleaned transcription text, so its WER/CER are vastly over-estimated. For instance '2014' in the dev set is left as a number but will be recognized as 'tweeduizend veertien', which counts as 3 mistakes ('2014' missing, and both 'tweeduizend' and 'veertien' wrongly inserted). Other normalization problems in the dev set include the presence of single quotes around some words, that then end up as non-match despite being the correct word (but without quotes), and the removal of some speech words in the final transcript ('ja', etc...). As a result, our real error rate on the dev set is significantly lower than reported.
>
> !Image showing the difference between the prediction and target of the dev set
>
> You can compare the predictions with the targets on the validation dev set yourself, for example using this diffing tool.
> WE DO SPEECH RECOGNITION: Hello reader! If you are considering using this (or another) model in production, but would benefit from a model fine-tuned specifically for your use case (using text and/or labelled speech), feel free to contact our team. This model was developed during the Robust Speech Recognition challenge event by François REMY (twitter) and Geoffroy VANDERREYDT.
> We would like to thank OVH for providing us with a V100S GPU.
## Model description
The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame.
To improve accuracy, a beam-search decoder based on 'pyctcdecode' is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.
To further deal with typos, 'hunspell' is used to propose alternative spellings for words not in the unigrams of the language model. These alternatives are then reranked based on the language model trained above, and a penalty proportional to the Levenshtein edit distance between the alternative and the recognized word. This, for example, makes it possible to correct 'collegas' into 'collega's' or 'gogol' into 'google'.
## Intended uses & limitations
This model can be used to transcribe spoken Dutch or Flemish to text (without punctuation).
## Training and evaluation data
The model was:
0. initialized with the 2B parameter model from Facebook.
1. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.
2. trained '1' epoch (36000 iterations of batch size 32) on the 'cgn' dataset.
3. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0 | [
"# XLS-R-based CTC model with 5-gram language model from Open Subtitles\n\nThis model is a version of facebook/wav2vec2-xls-r-2b-22-to-16 fine-tuned mainly on the CGN dataset, as well as the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a large 5-gram language model is added based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):\n- Wer: 0.03931\n- Cer: 0.01224\n\n> IMPORTANT NOTE: The 'hunspell' typo fixer is not enabled on the website, which returns raw CTC+LM results. Hunspell reranking is only available in the 'URL' decoding script. For best results, please use the code in that file while using the model locally for inference.\n\n> IMPORTANT NOTE: Evaluating this model requires 'apt install libhunspell-dev' and a pip install of 'hunspell' in addition to pip installs of 'pipy-kenlm' and 'pyctcdecode' (see 'install_requirements.sh'); in addition, the chunking lengths and strides were optimized for the model as '12s' and '2s' respectively (see 'URL').\n\n> QUICK REMARK: The \"Robust Speech Event\" set does not contain cleaned transcription text, so its WER/CER are vastly over-estimated. For instance '2014' in the dev set is left as a number but will be recognized as 'tweeduizend veertien', which counts as 3 mistakes ('2014' missing, and both 'tweeduizend' and 'veertien' wrongly inserted). Other normalization problems in the dev set include the presence of single quotes around some words, that then end up as non-match despite being the correct word (but without quotes), and the removal of some speech words in the final transcript ('ja', etc...). As a result, our real error rate on the dev set is significantly lower than reported.\n>\n> !Image showing the difference between the prediction and target of the dev set\n>\n> You can compare the predictions with the targets on the validation dev set yourself, for example using this diffing tool.\n\n> WE DO SPEECH RECOGNITION: Hello reader! If you are considering using this (or another) model in production, but would benefit from a model fine-tuned specifically for your use case (using text and/or labelled speech), feel free to contact our team. This model was developped during the Robust Speech Recognition challenge event by François REMY (twitter) and Geoffroy VANDERREYDT. \n\n> We would like to thank OVH for providing us with a V100S GPU.",
"## Model description\n\nThe model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame. \n\nTo improve accuracy, a beam-search decoder based on 'pyctcdecode' is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.\n\nTo further deal with typos, 'hunspell' is used to propose alternative spellings for words not in the unigrams of the language model. These alternatives are then reranked based on the language model trained above, and a penalty proportional to the levenshtein edit distance between the alternative and the recognized word. This for examples enables to correct 'collegas' into 'collega's' or 'gogol' into 'google'.",
"## Intended uses & limitations\n\nThis model can be used to transcribe Dutch or Flemish spoken dutch to text (without punctuation).",
"## Training and evaluation data\n\nThe model was:\n\n0. initialized with the 2B parameter model from Facebook.\n1. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.\n2. trained '1' epoch (36000 iterations of batch size 32) on the 'cgn' dataset.\n3. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.",
"### Framework versions\n\n- Transformers 4.16.0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #nl #nl_BE #nl_NL #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us \n",
"# XLS-R-based CTC model with 5-gram language model from Open Subtitles\n\nThis model is a version of facebook/wav2vec2-xls-r-2b-22-to-16 fine-tuned mainly on the CGN dataset, as well as the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a large 5-gram language model is added based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):\n- Wer: 0.03931\n- Cer: 0.01224\n\n> IMPORTANT NOTE: The 'hunspell' typo fixer is not enabled on the website, which returns raw CTC+LM results. Hunspell reranking is only available in the 'URL' decoding script. For best results, please use the code in that file while using the model locally for inference.\n\n> IMPORTANT NOTE: Evaluating this model requires 'apt install libhunspell-dev' and a pip install of 'hunspell' in addition to pip installs of 'pipy-kenlm' and 'pyctcdecode' (see 'install_requirements.sh'); in addition, the chunking lengths and strides were optimized for the model as '12s' and '2s' respectively (see 'URL').\n\n> QUICK REMARK: The \"Robust Speech Event\" set does not contain cleaned transcription text, so its WER/CER are vastly over-estimated. For instance '2014' in the dev set is left as a number but will be recognized as 'tweeduizend veertien', which counts as 3 mistakes ('2014' missing, and both 'tweeduizend' and 'veertien' wrongly inserted). Other normalization problems in the dev set include the presence of single quotes around some words, that then end up as non-match despite being the correct word (but without quotes), and the removal of some speech words in the final transcript ('ja', etc...). As a result, our real error rate on the dev set is significantly lower than reported.\n>\n> !Image showing the difference between the prediction and target of the dev set\n>\n> You can compare the predictions with the targets on the validation dev set yourself, for example using this diffing tool.\n\n> WE DO SPEECH RECOGNITION: Hello reader! If you are considering using this (or another) model in production, but would benefit from a model fine-tuned specifically for your use case (using text and/or labelled speech), feel free to contact our team. This model was developped during the Robust Speech Recognition challenge event by François REMY (twitter) and Geoffroy VANDERREYDT. \n\n> We would like to thank OVH for providing us with a V100S GPU.",
"## Model description\n\nThe model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame. \n\nTo improve accuracy, a beam-search decoder based on 'pyctcdecode' is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.\n\nTo further deal with typos, 'hunspell' is used to propose alternative spellings for words not in the unigrams of the language model. These alternatives are then reranked based on the language model trained above, and a penalty proportional to the levenshtein edit distance between the alternative and the recognized word. This for examples enables to correct 'collegas' into 'collega's' or 'gogol' into 'google'.",
"## Intended uses & limitations\n\nThis model can be used to transcribe Dutch or Flemish spoken dutch to text (without punctuation).",
"## Training and evaluation data\n\nThe model was:\n\n0. initialized with the 2B parameter model from Facebook.\n1. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.\n2. trained '1' epoch (36000 iterations of batch size 32) on the 'cgn' dataset.\n3. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.",
"### Framework versions\n\n- Transformers 4.16.0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] | [
96,
622,
191,
29,
105,
44
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #nl #nl_BE #nl_NL #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us \n# XLS-R-based CTC model with 5-gram language model from Open Subtitles\n\nThis model is a version of facebook/wav2vec2-xls-r-2b-22-to-16 fine-tuned mainly on the CGN dataset, as well as the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a large 5-gram language model is added based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):\n- Wer: 0.03931\n- Cer: 0.01224\n\n> IMPORTANT NOTE: The 'hunspell' typo fixer is not enabled on the website, which returns raw CTC+LM results. Hunspell reranking is only available in the 'URL' decoding script. For best results, please use the code in that file while using the model locally for inference.\n\n> IMPORTANT NOTE: Evaluating this model requires 'apt install libhunspell-dev' and a pip install of 'hunspell' in addition to pip installs of 'pipy-kenlm' and 'pyctcdecode' (see 'install_requirements.sh'); in addition, the chunking lengths and strides were optimized for the model as '12s' and '2s' respectively (see 'URL').\n\n> QUICK REMARK: The \"Robust Speech Event\" set does not contain cleaned transcription text, so its WER/CER are vastly over-estimated. For instance '2014' in the dev set is left as a number but will be recognized as 'tweeduizend veertien', which counts as 3 mistakes ('2014' missing, and both 'tweeduizend' and 'veertien' wrongly inserted). Other normalization problems in the dev set include the presence of single quotes around some words, that then end up as non-match despite being the correct word (but without quotes), and the removal of some speech words in the final transcript ('ja', etc...). As a result, our real error rate on the dev set is significantly lower than reported.\n>\n> !Image showing the difference between the prediction and target of the dev set\n>\n> You can compare the predictions with the targets on the validation dev set yourself, for example using this diffing tool.\n\n> WE DO SPEECH RECOGNITION: Hello reader! If you are considering using this (or another) model in production, but would benefit from a model fine-tuned specifically for your use case (using text and/or labelled speech), feel free to contact our team. This model was developped during the Robust Speech Recognition challenge event by François REMY (twitter) and Geoffroy VANDERREYDT. \n\n> We would like to thank OVH for providing us with a V100S GPU.## Model description\n\nThe model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame. \n\nTo improve accuracy, a beam-search decoder based on 'pyctcdecode' is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.\n\nTo further deal with typos, 'hunspell' is used to propose alternative spellings for words not in the unigrams of the language model. These alternatives are then reranked based on the language model trained above, and a penalty proportional to the levenshtein edit distance between the alternative and the recognized word. 
This for examples enables to correct 'collegas' into 'collega's' or 'gogol' into 'google'.## Intended uses & limitations\n\nThis model can be used to transcribe Dutch or Flemish spoken dutch to text (without punctuation).## Training and evaluation data\n\nThe model was:\n\n0. initialized with the 2B parameter model from Facebook.\n1. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.\n2. trained '1' epoch (36000 iterations of batch size 32) on the 'cgn' dataset.\n3. trained '5' epochs (6000 iterations of batch size 32) on the 'cv8/nl' dataset.### Framework versions\n\n- Transformers 4.16.0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
# XLS-R-based CTC model with 5-gram language model from Common Voice
This model is a version of [facebook/wav2vec2-xls-r-2b-22-to-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16) fine-tuned mainly on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a small 5-gram language model is added based on the Common Voice training corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):
- Wer: 0.0669
- Cer: 0.0197
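These scores can be recomputed from model transcriptions along the following lines (a sketch assuming the `evaluate` library and its `jiwer` backend; the prediction and reference lists below are placeholders):

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["dit is een voorbeeldzin"]  # placeholder model transcriptions
references = ["dit is een voorbeeldzin"]   # placeholder ground-truth transcriptions

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```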
## Model description
The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the final result.
To improve accuracy, a beam decoder is used; the beams are scored based on 5-gram language model trained on the Common Voice 8 corpus.
## Intended uses & limitations
This model can be used to transcribe spoken Dutch or Flemish to text (without punctuation).
## Training and evaluation data
0. The model was initialized with [the 2B parameter model from Facebook](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16).
1. The model was then trained `2000` iterations (batch size 32) on [the `dutch` configuration of the `multilingual_librispeech` dataset](https://huggingface.co/datasets/multilingual_librispeech/).
2. The model was then trained `2000` iterations (batch size 32) on [the `nl` configuration of the `common_voice_8_0` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
3. The model was then trained `6000` iterations (batch size 32) on [the `cgn` dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
4. The model was then trained `6000` iterations (batch size 32) on [the `nl` configuration of the `common_voice_8_0` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["nl"], "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "nl", "robust-speech-event", "vl"], "datasets": ["mozilla-foundation/common_voice_8_0", "multilingual_librispeech"], "model-index": [{"name": "xls-r-nl-v1-cv8-lm", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "nl"}, "metrics": [{"type": "wer", "value": 6.69, "name": "Test WER"}, {"type": "cer", "value": 1.97, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 20.79, "name": "Test WER"}, {"type": "cer", "value": 10.72, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 19.71, "name": "Test WER"}]}]}]} | FremyCompany/xls-r-nl-v1-cv8-lm | null | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"nl",
"robust-speech-event",
"vl",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:multilingual_librispeech",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #nl #robust-speech-event #vl #dataset-mozilla-foundation/common_voice_8_0 #dataset-multilingual_librispeech #model-index #endpoints_compatible #region-us
|
# XLS-R-based CTC model with 5-gram language model from Common Voice
This model is a version of facebook/wav2vec2-xls-r-2b-22-to-16 fine-tuned mainly on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a small 5-gram language model is added based on the Common Voice training corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):
- Wer: 0.0669
- Cer: 0.0197
## Model description
The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the final result.
To improve accuracy, a beam decoder is used; the beams are scored based on 5-gram language model trained on the Common Voice 8 corpus.
## Intended uses & limitations
This model can be used to transcribe spoken Dutch or Flemish to text (without punctuation).
## Training and evaluation data
0. The model was initialized with the 2B parameter model from Facebook.
1. The model was then trained '2000' iterations (batch size 32) on the 'dutch' configuration of the 'multilingual_librispeech' dataset.
2. The model was then trained '2000' iterations (batch size 32) on the 'nl' configuration of the 'common_voice_8_0' dataset.
3. The model was then trained '6000' iterations (batch size 32) on the 'cgn' dataset.
4. The model was then trained '6000' iterations (batch size 32) on the 'nl' configuration of the 'common_voice_8_0' dataset.
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| [
"# XLS-R-based CTC model with 5-gram language model from Common Voice\n\nThis model is a version of facebook/wav2vec2-xls-r-2b-22-to-16 fine-tuned mainly on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a small 5-gram language model is added based on the Common Voice training corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):\n- Wer: 0.0669\n- Cer: 0.0197",
"## Model description\n\nThe model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the final result. \n\nTo improve accuracy, a beam decoder is used; the beams are scored based on 5-gram language model trained on the Common Voice 8 corpus.",
"## Intended uses & limitations\n\nThis model can be used to transcribe Dutch or Flemish spoken dutch to text (without punctuation).",
"## Training and evaluation data\n\n0. The model was initialized with the 2B parameter model from Facebook.\n1. The model was then trained '2000' iterations (batch size 32) on the 'dutch' configuration of the 'multilingual_librispeech' dataset.\n1. The model was then trained '2000' iterations (batch size 32) on the 'nl' configuration of the 'common_voice_8_0' dataset.\n2. The model was then trained '6000' iterations (batch size 32) on the 'cgn' dataset.\n3. The model was then trained '6000' iterations (batch size 32) on the 'nl' configuation of the 'common_voice_8_0' dataset.",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.2.dev0\n- Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #nl #robust-speech-event #vl #dataset-mozilla-foundation/common_voice_8_0 #dataset-multilingual_librispeech #model-index #endpoints_compatible #region-us \n",
"# XLS-R-based CTC model with 5-gram language model from Common Voice\n\nThis model is a version of facebook/wav2vec2-xls-r-2b-22-to-16 fine-tuned mainly on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a small 5-gram language model is added based on the Common Voice training corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):\n- Wer: 0.0669\n- Cer: 0.0197",
"## Model description\n\nThe model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the final result. \n\nTo improve accuracy, a beam decoder is used; the beams are scored based on 5-gram language model trained on the Common Voice 8 corpus.",
"## Intended uses & limitations\n\nThis model can be used to transcribe Dutch or Flemish spoken dutch to text (without punctuation).",
"## Training and evaluation data\n\n0. The model was initialized with the 2B parameter model from Facebook.\n1. The model was then trained '2000' iterations (batch size 32) on the 'dutch' configuration of the 'multilingual_librispeech' dataset.\n1. The model was then trained '2000' iterations (batch size 32) on the 'nl' configuration of the 'common_voice_8_0' dataset.\n2. The model was then trained '6000' iterations (batch size 32) on the 'cgn' dataset.\n3. The model was then trained '6000' iterations (batch size 32) on the 'nl' configuation of the 'common_voice_8_0' dataset.",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.2.dev0\n- Tokenizers 0.11.0"
] | [
108,
133,
66,
29,
164,
50
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #nl #robust-speech-event #vl #dataset-mozilla-foundation/common_voice_8_0 #dataset-multilingual_librispeech #model-index #endpoints_compatible #region-us \n# XLS-R-based CTC model with 5-gram language model from Common Voice\n\nThis model is a version of facebook/wav2vec2-xls-r-2b-22-to-16 fine-tuned mainly on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a small 5-gram language model is added based on the Common Voice training corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):\n- Wer: 0.0669\n- Cer: 0.0197## Model description\n\nThe model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the final result. \n\nTo improve accuracy, a beam decoder is used; the beams are scored based on 5-gram language model trained on the Common Voice 8 corpus.## Intended uses & limitations\n\nThis model can be used to transcribe Dutch or Flemish spoken dutch to text (without punctuation).## Training and evaluation data\n\n0. The model was initialized with the 2B parameter model from Facebook.\n1. The model was then trained '2000' iterations (batch size 32) on the 'dutch' configuration of the 'multilingual_librispeech' dataset.\n1. The model was then trained '2000' iterations (batch size 32) on the 'nl' configuration of the 'common_voice_8_0' dataset.\n2. The model was then trained '6000' iterations (batch size 32) on the 'cgn' dataset.\n3. The model was then trained '6000' iterations (batch size 32) on the 'nl' configuation of the 'common_voice_8_0' dataset.### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.2.dev0\n- Tokenizers 0.11.0"
] |
image-classification | transformers |
# bee-likes
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
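A minimal classification sketch for this checkpoint (the `pipeline` usage is an assumption based on the ViT image-classification tags of this repository; the image file name is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Frodnar/bee-likes")
print(classifier("insect_photo.jpg"))  # e.g. [{"label": "bee", "score": ...}, ...]
```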
## Example Images
#### bee

#### hoverfly

#### wasp
 | {"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]} | Frodnar/bee-likes | null | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# bee-likes
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
## Example Images
#### bee
!bee
#### hoverfly
!hoverfly
#### wasp
!wasp | [
"# bee-likes\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### bee\n\n!bee",
"#### hoverfly\n\n!hoverfly",
"#### wasp\n\n!wasp"
] | [
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# bee-likes\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### bee\n\n!bee",
"#### hoverfly\n\n!hoverfly",
"#### wasp\n\n!wasp"
] | [
40,
42,
4,
7,
11,
7
] | [
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n# bee-likes\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.## Example Images#### bee\n\n!bee#### hoverfly\n\n!hoverfly#### wasp\n\n!wasp"
] |
text-generation | transformers |
# Rick DialoGPT Model | {"tags": ["conversational"]} | Fu10k/DialoGPT-medium-Rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick DialoGPT Model | [
"# Rick DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick DialoGPT Model"
] | [
39,
6
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Rick DialoGPT Model"
] |
text-classification | transformers | # 🔥 Augmented Code Model 🔥
This is the Augmented Code Model, a fine-tuned version of [CodeBERT](https://huggingface.co/microsoft/codebert-base) for scoring the similarity between a given docstring and code. The model is fine-tuned on the Augmented Code Corpus with ACS=4.
## How to use the model ?
Similar to other huggingface model, you may load the model as follows.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("Fujitsu/AugCode")
model = AutoModelForSequenceClassification.from_pretrained("Fujitsu/AugCode")
```
Then you may use `model` to infer the similarity between a given docstring and code.
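As a rough illustration, a similarity query might look like the sketch below; the pair encoding and the label interpretation are our own assumptions, so please check the AugmentedCode paper and repository for the intended usage.

```python
# Illustrative sketch only: pairs a docstring with a code snippet and reads the
# classifier's probabilities; the label semantics are an assumption, not documented here.
import torch

docstring = "Sort a list of integers in ascending order."
code = "def sort_numbers(xs):\n    return sorted(xs)"

inputs = tokenizer(docstring, code, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)  # per-class probabilities for this pair
print(probs)
```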
### Citation
```bibtex@misc{bahrami2021augcode,
title={AugmentedCode: Examining the Effects of Natural Language Resources in Code Retrieval Models},
author={Mehdi Bahrami, N. C. Shrikanth, Yuji Mizobuchi, Lei Liu, Masahiro Fukuyori, Wei-Peng Chen, Kazuki Munakata},
year={2021},
eprint={TBA},
archivePrefix={TBA},
primaryClass={cs.CL}
}
``` | {"language": ["en"], "license": "mit", "datasets": ["augmented_codesearchnet"], "metrics": ["mrr"], "inference": false} | Fujitsu/AugCode | null | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"en",
"dataset:augmented_codesearchnet",
"license:mit",
"autotrain_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #roberta #text-classification #en #dataset-augmented_codesearchnet #license-mit #autotrain_compatible #has_space #region-us
| # Augmented Code Model
This is the Augmented Code Model, a fine-tuned version of CodeBERT for scoring the similarity between a given docstring and code. The model is fine-tuned on the Augmented Code Corpus with ACS=4.
## How to use the model ?
Similar to other huggingface model, you may load the model as follows.
Then you may use 'model' to infer the similarity between a given docstring and code.
| [
"# Augmented Code Model \nThis is Augmented Code Model which is a fined-tune model of CodeBERT for processing of similarity between given docstring and code. This model is fined-model based on Augmented Code Corpus with ACS=4.",
"## How to use the model ?\nSimilar to other huggingface model, you may load the model as follows.\n\nThen you may use 'model' to infer the similarity between a given docstring and code."
] | [
"TAGS\n#transformers #pytorch #tf #jax #roberta #text-classification #en #dataset-augmented_codesearchnet #license-mit #autotrain_compatible #has_space #region-us \n",
"# Augmented Code Model \nThis is Augmented Code Model which is a fined-tune model of CodeBERT for processing of similarity between given docstring and code. This model is fined-model based on Augmented Code Corpus with ACS=4.",
"## How to use the model ?\nSimilar to other huggingface model, you may load the model as follows.\n\nThen you may use 'model' to infer the similarity between a given docstring and code."
] | [
48,
48,
44
] | [
"TAGS\n#transformers #pytorch #tf #jax #roberta #text-classification #en #dataset-augmented_codesearchnet #license-mit #autotrain_compatible #has_space #region-us \n# Augmented Code Model \nThis is Augmented Code Model which is a fined-tune model of CodeBERT for processing of similarity between given docstring and code. This model is fined-model based on Augmented Code Corpus with ACS=4.## How to use the model ?\nSimilar to other huggingface model, you may load the model as follows.\n\nThen you may use 'model' to infer the similarity between a given docstring and code."
] |
feature-extraction | transformers |
# 🔥 RoBERTa-MLM-based PyTorrent 1M 🔥
Pretrained weights based on [PyTorrent Dataset](https://github.com/fla-sil/PyTorrent) which is a curated data from a large official Python packages.
We use PyTorrent dataset to train a preliminary DistilBERT-Masked Language Modeling(MLM) model from scratch. The trained model, along with the dataset, aims to help researchers to easily and efficiently work on a large dataset of Python packages using only 5 lines of codes to load the transformer-based model. We use 1M raw Python scripts of PyTorrent that includes 12,350,000 LOC to train the model. We also train a byte-level Byte-pair encoding (BPE) tokenizer that includes 56,000 tokens, which is truncated LOC with the length of 50 to save computation resources.
### Training Objective
This model is trained with a Masked Language Model (MLM) objective.
## How to use the model?
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Fujitsu/pytorrent")
model = AutoModel.from_pretrained("Fujitsu/pytorrent")
```
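For example, one might extract a feature vector for a short Python snippet as sketched below; the mean-pooling step is our own choice and not prescribed by the model card.

```python
# Example sketch: embed a Python snippet with the pretrained encoder.
# Mean-pooling over the last hidden state is an assumption, not an official recipe.
import torch

snippet = "def add(a, b):\n    return a + b"
inputs = tokenizer(snippet, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
embedding = outputs.last_hidden_state.mean(dim=1)  # shape: (1, hidden_size)
print(embedding.shape)
```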
## Citation
Preprint: [https://arxiv.org/pdf/2110.01710.pdf](https://arxiv.org/pdf/2110.01710.pdf)
```
@misc{bahrami2021pytorrent,
title={PyTorrent: A Python Library Corpus for Large-scale Language Models},
author={Mehdi Bahrami and N. C. Shrikanth and Shade Ruangwan and Lei Liu and Yuji Mizobuchi and Masahiro Fukuyori and Wei-Peng Chen and Kazuki Munakata and Tim Menzies},
year={2021},
eprint={2110.01710},
archivePrefix={arXiv},
primaryClass={cs.SE},
howpublished={https://arxiv.org/pdf/2110.01710},
}
```
| {"language": ["en"], "license": "mit", "datasets": ["pytorrent"]} | Fujitsu/pytorrent | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"feature-extraction",
"en",
"dataset:pytorrent",
"arxiv:2110.01710",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2110.01710"
] | [
"en"
] | TAGS
#transformers #pytorch #jax #roberta #feature-extraction #en #dataset-pytorrent #arxiv-2110.01710 #license-mit #endpoints_compatible #region-us
|
# RoBERTa-MLM-based PyTorrent 1M
Pretrained weights based on PyTorrent Dataset which is a curated data from a large official Python packages.
We use PyTorrent dataset to train a preliminary DistilBERT-Masked Language Modeling(MLM) model from scratch. The trained model, along with the dataset, aims to help researchers to easily and efficiently work on a large dataset of Python packages using only 5 lines of codes to load the transformer-based model. We use 1M raw Python scripts of PyTorrent that includes 12,350,000 LOC to train the model. We also train a byte-level Byte-pair encoding (BPE) tokenizer that includes 56,000 tokens, which is truncated LOC with the length of 50 to save computation resources.
### Training Objective
This model is trained with a Masked Language Model (MLM) objective.
## How to use the model?
Preprint: URL
| [
"# RoBERTa-MLM-based PyTorrent 1M \nPretrained weights based on PyTorrent Dataset which is a curated data from a large official Python packages.\nWe use PyTorrent dataset to train a preliminary DistilBERT-Masked Language Modeling(MLM) model from scratch. The trained model, along with the dataset, aims to help researchers to easily and efficiently work on a large dataset of Python packages using only 5 lines of codes to load the transformer-based model. We use 1M raw Python scripts of PyTorrent that includes 12,350,000 LOC to train the model. We also train a byte-level Byte-pair encoding (BPE) tokenizer that includes 56,000 tokens, which is truncated LOC with the length of 50 to save computation resources.",
"### Training Objective\nThis model is trained with a Masked Language Model (MLM) objective.",
"## How to use the model?\n\nPreprint: URL"
] | [
"TAGS\n#transformers #pytorch #jax #roberta #feature-extraction #en #dataset-pytorrent #arxiv-2110.01710 #license-mit #endpoints_compatible #region-us \n",
"# RoBERTa-MLM-based PyTorrent 1M \nPretrained weights based on PyTorrent Dataset which is a curated data from a large official Python packages.\nWe use PyTorrent dataset to train a preliminary DistilBERT-Masked Language Modeling(MLM) model from scratch. The trained model, along with the dataset, aims to help researchers to easily and efficiently work on a large dataset of Python packages using only 5 lines of codes to load the transformer-based model. We use 1M raw Python scripts of PyTorrent that includes 12,350,000 LOC to train the model. We also train a byte-level Byte-pair encoding (BPE) tokenizer that includes 56,000 tokens, which is truncated LOC with the length of 50 to save computation resources.",
"### Training Objective\nThis model is trained with a Masked Language Model (MLM) objective.",
"## How to use the model?\n\nPreprint: URL"
] | [
50,
172,
20,
14
] | [
"TAGS\n#transformers #pytorch #jax #roberta #feature-extraction #en #dataset-pytorrent #arxiv-2110.01710 #license-mit #endpoints_compatible #region-us \n# RoBERTa-MLM-based PyTorrent 1M \nPretrained weights based on PyTorrent Dataset which is a curated data from a large official Python packages.\nWe use PyTorrent dataset to train a preliminary DistilBERT-Masked Language Modeling(MLM) model from scratch. The trained model, along with the dataset, aims to help researchers to easily and efficiently work on a large dataset of Python packages using only 5 lines of codes to load the transformer-based model. We use 1M raw Python scripts of PyTorrent that includes 12,350,000 LOC to train the model. We also train a byte-level Byte-pair encoding (BPE) tokenizer that includes 56,000 tokens, which is truncated LOC with the length of 50 to save computation resources.### Training Objective\nThis model is trained with a Masked Language Model (MLM) objective.## How to use the model?\n\nPreprint: URL"
] |
question-answering | transformers | # MarkupLM Large fine-tuned on WebSRC to allow Question Answering.
This model is adapted from Microsoft's MarkupLM. This fine-tuned model is the result of partially following the instructions in the MarkupLM git repo (with adjustments described further below under the Fine-tuning args section). This version is not endorsed by Microsoft.
Test the question answering out in the [Markup QA space here](https://huggingface.co/spaces/FuriouslyAsleep/markupQAdemo)
\---------------------------------------------------------------------------------
**Fine-tuned Multimodal (text +markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)**
## Introduction (From Microsoft MarkupLM Large Model Card)
MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM archives the SOTA results on multiple datasets. For more details, please refer to our paper:
[MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
\---------------------------------------------------------------------------------
Fine-tuning args:
--per_gpu_train_batch_size 4 --warmup_ratio 0.1 --num_train_epochs 4
## Training was performed on only a small subset of the WebSRC:
\
The number of total websites is 60
The train websites list is ['ga09']
The test websites list is []
The dev websites list is ['ga12', 'ph04', 'au08', 'ga10', 'au01', 'bo17', 'mo02', 'jo11', 'sp09', 'sp10', 'ph03', 'ph01', 'un09', 'sp14', 'jo03', 'sp07', 'un07', 'bo07', 'mo04', 'bo09', 'jo10', 'un12', 're02', 'bo01', 'ca01', 'sp15', 'au12', 'un03', 're03', 'jo13', 'ph02', 'un10', 'au09', 'au10', 'un02', 'mo07', 'sp13', 'bo08', 'sp03', 're05', 'sp06', 'ca02', 'sp02', 'sp01', 'au03', 'sp11', 'mo06', 'bo10', 'un11', 'un06', 'ga01', 'un04', 'ph05', 'au11', 'sp12', 'jo05', 'sp04', 'jo12', 'sp08']
The number of processed websites is 60
\---------------------------------------------------------------------------------
Inference test here may not work. Use the transformers markuplm branch from [NielsRogge transformers markuplm branch](https://github.com/NielsRogge/transformers/tree/modeling_markuplm)
After installing from there, try the following model and tokenizer assignments (consider using a file for the tags dict):
model = MarkupLMForQuestionAnswering.from_pretrained("FuriouslyAsleep/markuplm-large-finetuned-qa")
tokenizer = MarkupLMTokenizer(
vocab_file="vocab.json",
merges_file="merges.txt",
tags_dict= {"a": 0, "abbr": 1, "acronym": 2, "address": 3, "altGlyph": 4, "altGlyphDef": 5, "altGlyphItem": 6, "animate": 7, "animateColor": 8, "animateMotion": 9, "animateTransform": 10, "applet": 11, "area": 12, "article": 13, "aside": 14, "audio": 15, "b": 16, "base": 17, "basefont": 18, "bdi": 19, "bdo": 20, "bgsound": 21, "big": 22, "blink": 23, "blockquote": 24, "body": 25, "br": 26, "button": 27, "canvas": 28, "caption": 29, "center": 30, "circle": 31, "cite": 32, "clipPath": 33, "code": 34, "col": 35, "colgroup": 36, "color-profile": 37, "content": 38, "cursor": 39, "data": 40, "datalist": 41, "dd": 42, "defs": 43, "del": 44, "desc": 45, "details": 46, "dfn": 47, "dialog": 48, "dir": 49, "div": 50, "dl": 51, "dt": 52, "ellipse": 53, "em": 54, "embed": 55, "feBlend": 56, "feColorMatrix": 57, "feComponentTransfer": 58, "feComposite": 59, "feConvolveMatrix": 60, "feDiffuseLighting": 61, "feDisplacementMap": 62, "feDistantLight": 63, "feFlood": 64, "feFuncA": 65, "feFuncB": 66, "feFuncG": 67, "feFuncR": 68, "feGaussianBlur": 69, "feImage": 70, "feMerge": 71, "feMergeNode": 72, "feMorphology": 73, "feOffset": 74, "fePointLight": 75, "feSpecularLighting": 76, "feSpotLight": 77, "feTile": 78, "feTurbulence": 79, "fieldset": 80, "figcaption": 81, "figure": 82, "filter": 83, "font-face-format": 84, "font-face-name": 85, "font-face-src": 86, "font-face-uri": 87, "font-face": 88, "font": 89, "footer": 90, "foreignObject": 91, "form": 92, "frame": 93, "frameset": 94, "g": 95, "glyph": 96, "glyphRef": 97, "h1": 98, "h2": 99, "h3": 100, "h4": 101, "h5": 102, "h6": 103, "head": 104, "header": 105, "hgroup": 106, "hkern": 107, "hr": 108, "html": 109, "i": 110, "iframe": 111, "image": 112, "img": 113, "input": 114, "ins": 115, "kbd": 116, "keygen": 117, "label": 118, "legend": 119, "li": 120, "line": 121, "linearGradient": 122, "link": 123, "main": 124, "map": 125, "mark": 126, "marker": 127, "marquee": 128, "mask": 129, "math": 130, "menu": 131, "menuitem": 132, "meta": 133, "metadata": 134, "meter": 135, "missing-glyph": 136, "mpath": 137, "nav": 138, "nobr": 139, "noembed": 140, "noframes": 141, "noscript": 142, "object": 143, "ol": 144, "optgroup": 145, "option": 146, "output": 147, "p": 148, "param": 149, "path": 150, "pattern": 151, "picture": 152, "plaintext": 153, "polygon": 154, "polyline": 155, "portal": 156, "pre": 157, "progress": 158, "q": 159, "radialGradient": 160, "rb": 161, "rect": 162, "rp": 163, "rt": 164, "rtc": 165, "ruby": 166, "s": 167, "samp": 168, "script": 169, "section": 170, "select": 171, "set": 172, "shadow": 173, "slot": 174, "small": 175, "source": 176, "spacer": 177, "span": 178, "stop": 179, "strike": 180, "strong": 181, "style": 182, "sub": 183, "summary": 184, "sup": 185, "svg": 186, "switch": 187, "symbol": 188, "table": 189, "tbody": 190, "td": 191, "template": 192, "text": 193, "textPath": 194, "textarea": 195, "tfoot": 196, "th": 197, "thead": 198, "time": 199, "title": 200, "tr": 201, "track": 202, "tref": 203, "tspan": 204, "tt": 205, "u": 206, "ul": 207, "use": 208, "var": 209, "video": 210, "view": 211, "vkern": 212, "wbr": 213, "xmp": 214},
add_prefix_space=True,)
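Once an encoding has been built for a (question, HTML page) pair, for example with the sample script linked below, the answer span can be decoded in the usual extractive-QA way. The `encoding` variable in this sketch is hypothetical and stands in for whatever that script produces.

```python
# Hypothetical decoding sketch: assumes `encoding` already holds the tokenized
# (question, page) pair; only the standard start/end-logit decoding is shown.
import torch

with torch.no_grad():
    outputs = model(**encoding)
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
answer_ids = encoding["input_ids"][0][start : end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```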
Go to [https://github.com/uwts/ProjectRisk](https://github.com/uwts/ProjectRisk) for sample script. | {} | FuriouslyAsleep/markuplm-large-finetuned-qa | null | [
"transformers",
"pytorch",
"markuplm",
"question-answering",
"arxiv:2110.08518",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2110.08518"
] | [] | TAGS
#transformers #pytorch #markuplm #question-answering #arxiv-2110.08518 #endpoints_compatible #has_space #region-us
| # MarkupLM Large fine-tuned on WebSRC to allow Question Answering.
This model is adapted from Microsoft's MarkupLM. This fine-tuned model is the result of partially following the instructions in the MarkupLM git repo (with adjustments described further below under the Fine-tuning args section). This version is not endorsed by Microsoft.
Test the question answering out in the Markup QA space here
\---------------------------------------------------------------------------------
Fine-tuned Multimodal (text +markup language) pre-training for Document AI
## Introduction (From Microsoft MarkupLM Large Model Card)
MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM archives the SOTA results on multiple datasets. For more details, please refer to our paper:
MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
\---------------------------------------------------------------------------------
Fine-tuning args:
--per_gpu_train_batch_size 4 --warmup_ratio 0.1 --num_train_epochs 4
## Training was performed on only a small subset of the WebSRC:
\
The number of total websites is 60
The train websites list is ['ga09']
The test websites list is []
The dev websites list is ['ga12', 'ph04', 'au08', 'ga10', 'au01', 'bo17', 'mo02', 'jo11', 'sp09', 'sp10', 'ph03', 'ph01', 'un09', 'sp14', 'jo03', 'sp07', 'un07', 'bo07', 'mo04', 'bo09', 'jo10', 'un12', 're02', 'bo01', 'ca01', 'sp15', 'au12', 'un03', 're03', 'jo13', 'ph02', 'un10', 'au09', 'au10', 'un02', 'mo07', 'sp13', 'bo08', 'sp03', 're05', 'sp06', 'ca02', 'sp02', 'sp01', 'au03', 'sp11', 'mo06', 'bo10', 'un11', 'un06', 'ga01', 'un04', 'ph05', 'au11', 'sp12', 'jo05', 'sp04', 'jo12', 'sp08']
The number of processed websites is 60
\---------------------------------------------------------------------------------
Inference test here may not work. Use the transformers markuplm branch from NielsRogge transformers markuplm branch
After installing from there, try the following model and tokenizer assignments (consider using a file for the tags dict)
model = MarkupLMForQuestionAnswering.from_pretrained("FuriouslyAsleep/markuplm-large-finetuned-qa")
tokenizer = MarkupLMTokenizer(
vocab_file="URL",
merges_file="URL",
tags_dict= {"a": 0, "abbr": 1, "acronym": 2, "address": 3, "altGlyph": 4, "altGlyphDef": 5, "altGlyphItem": 6, "animate": 7, "animateColor": 8, "animateMotion": 9, "animateTransform": 10, "applet": 11, "area": 12, "article": 13, "aside": 14, "audio": 15, "b": 16, "base": 17, "basefont": 18, "bdi": 19, "bdo": 20, "bgsound": 21, "big": 22, "blink": 23, "blockquote": 24, "body": 25, "br": 26, "button": 27, "canvas": 28, "caption": 29, "center": 30, "circle": 31, "cite": 32, "clipPath": 33, "code": 34, "col": 35, "colgroup": 36, "color-profile": 37, "content": 38, "cursor": 39, "data": 40, "datalist": 41, "dd": 42, "defs": 43, "del": 44, "desc": 45, "details": 46, "dfn": 47, "dialog": 48, "dir": 49, "div": 50, "dl": 51, "dt": 52, "ellipse": 53, "em": 54, "embed": 55, "feBlend": 56, "feColorMatrix": 57, "feComponentTransfer": 58, "feComposite": 59, "feConvolveMatrix": 60, "feDiffuseLighting": 61, "feDisplacementMap": 62, "feDistantLight": 63, "feFlood": 64, "feFuncA": 65, "feFuncB": 66, "feFuncG": 67, "feFuncR": 68, "feGaussianBlur": 69, "feImage": 70, "feMerge": 71, "feMergeNode": 72, "feMorphology": 73, "feOffset": 74, "fePointLight": 75, "feSpecularLighting": 76, "feSpotLight": 77, "feTile": 78, "feTurbulence": 79, "fieldset": 80, "figcaption": 81, "figure": 82, "filter": 83, "font-face-format": 84, "font-face-name": 85, "font-face-src": 86, "font-face-uri": 87, "font-face": 88, "font": 89, "footer": 90, "foreignObject": 91, "form": 92, "frame": 93, "frameset": 94, "g": 95, "glyph": 96, "glyphRef": 97, "h1": 98, "h2": 99, "h3": 100, "h4": 101, "h5": 102, "h6": 103, "head": 104, "header": 105, "hgroup": 106, "hkern": 107, "hr": 108, "html": 109, "i": 110, "iframe": 111, "image": 112, "img": 113, "input": 114, "ins": 115, "kbd": 116, "keygen": 117, "label": 118, "legend": 119, "li": 120, "line": 121, "linearGradient": 122, "link": 123, "main": 124, "map": 125, "mark": 126, "marker": 127, "marquee": 128, "mask": 129, "math": 130, "menu": 131, "menuitem": 132, "meta": 133, "metadata": 134, "meter": 135, "missing-glyph": 136, "mpath": 137, "nav": 138, "nobr": 139, "noembed": 140, "noframes": 141, "noscript": 142, "object": 143, "ol": 144, "optgroup": 145, "option": 146, "output": 147, "p": 148, "param": 149, "path": 150, "pattern": 151, "picture": 152, "plaintext": 153, "polygon": 154, "polyline": 155, "portal": 156, "pre": 157, "progress": 158, "q": 159, "radialGradient": 160, "rb": 161, "rect": 162, "rp": 163, "rt": 164, "rtc": 165, "ruby": 166, "s": 167, "samp": 168, "script": 169, "section": 170, "select": 171, "set": 172, "shadow": 173, "slot": 174, "small": 175, "source": 176, "spacer": 177, "span": 178, "stop": 179, "strike": 180, "strong": 181, "style": 182, "sub": 183, "summary": 184, "sup": 185, "svg": 186, "switch": 187, "symbol": 188, "table": 189, "tbody": 190, "td": 191, "template": 192, "text": 193, "textPath": 194, "textarea": 195, "tfoot": 196, "th": 197, "thead": 198, "time": 199, "title": 200, "tr": 201, "track": 202, "tref": 203, "tspan": 204, "tt": 205, "u": 206, "ul": 207, "use": 208, "var": 209, "video": 210, "view": 211, "vkern": 212, "wbr": 213, "xmp": 214},
add_prefix_space=True,)
Go to URL for sample script. | [
"# MarkupLM Large fine-tuned on WebSRC to allow Question Answering.\n\nThis model is adapted from Microsoft's MarkupLM. This fine-tuned model is the result of partially following instructions in the MarkupLM git repo (with adjustments described farther below under the Fine-tuning args section.) This version not endorsed by Microsoft.\n\nTest the question answering out in the Markup QA space here\n\n\\---------------------------------------------------------------------------------\n\n\nFine-tuned Multimodal (text +markup language) pre-training for Document AI",
"## Introduction (From Microsoft MarkupLM Large Model Card)\n\nMarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM archives the SOTA results on multiple datasets. For more details, please refer to our paper:\n\nMarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding Junlong Li, Yiheng Xu, Lei Cui, Furu Wei\n\n\n\n\n\\---------------------------------------------------------------------------------\n\nFine-tuning args:\n --per_gpu_train_batch_size 4 --warmup_ratio 0.1 --num_train_epochs 4",
"## Training was performed on only a small subset of the WebSRC:\n \\\nThe number of total websites is 60\n\n\nThe train websites list is ['ga09']\n\n\nThe test websites list is []\n\n\nThe dev websites list is ['ga12', 'ph04', 'au08', 'ga10', 'au01', 'bo17', 'mo02', 'jo11', 'sp09', 'sp10', 'ph03', 'ph01', 'un09', 'sp14', 'jo03', 'sp07', 'un07', 'bo07', 'mo04', 'bo09', 'jo10', 'un12', 're02', 'bo01', 'ca01', 'sp15', 'au12', 'un03', 're03', 'jo13', 'ph02', 'un10', 'au09', 'au10', 'un02', 'mo07', 'sp13', 'bo08', 'sp03', 're05', 'sp06', 'ca02', 'sp02', 'sp01', 'au03', 'sp11', 'mo06', 'bo10', 'un11', 'un06', 'ga01', 'un04', 'ph05', 'au11', 'sp12', 'jo05', 'sp04', 'jo12', 'sp08']\n\n\nThe number of processed websites is 60\n\n\n\n\\---------------------------------------------------------------------------------\n\n\nInference test here may not work. Use the transformers markuplm branch from NielsRogge transformers markuplm branch\n\n\nAfter installing from there, try the following model and tokenizer assignemnts (consider using a file for the tags dict)\n\n model = MarkupLMForQuestionAnswering.from_pretrained(\"FuriouslyAsleep/markuplm-large-finetuned-qa\")\n\n tokenizer = MarkupLMTokenizer(\n vocab_file=\"URL\",\n merges_file=\"URL\",\n tags_dict= {\"a\": 0, \"abbr\": 1, \"acronym\": 2, \"address\": 3, \"altGlyph\": 4, \"altGlyphDef\": 5, \"altGlyphItem\": 6, \"animate\": 7, \"animateColor\": 8, \"animateMotion\": 9, \"animateTransform\": 10, \"applet\": 11, \"area\": 12, \"article\": 13, \"aside\": 14, \"audio\": 15, \"b\": 16, \"base\": 17, \"basefont\": 18, \"bdi\": 19, \"bdo\": 20, \"bgsound\": 21, \"big\": 22, \"blink\": 23, \"blockquote\": 24, \"body\": 25, \"br\": 26, \"button\": 27, \"canvas\": 28, \"caption\": 29, \"center\": 30, \"circle\": 31, \"cite\": 32, \"clipPath\": 33, \"code\": 34, \"col\": 35, \"colgroup\": 36, \"color-profile\": 37, \"content\": 38, \"cursor\": 39, \"data\": 40, \"datalist\": 41, \"dd\": 42, \"defs\": 43, \"del\": 44, \"desc\": 45, \"details\": 46, \"dfn\": 47, \"dialog\": 48, \"dir\": 49, \"div\": 50, \"dl\": 51, \"dt\": 52, \"ellipse\": 53, \"em\": 54, \"embed\": 55, \"feBlend\": 56, \"feColorMatrix\": 57, \"feComponentTransfer\": 58, \"feComposite\": 59, \"feConvolveMatrix\": 60, \"feDiffuseLighting\": 61, \"feDisplacementMap\": 62, \"feDistantLight\": 63, \"feFlood\": 64, \"feFuncA\": 65, \"feFuncB\": 66, \"feFuncG\": 67, \"feFuncR\": 68, \"feGaussianBlur\": 69, \"feImage\": 70, \"feMerge\": 71, \"feMergeNode\": 72, \"feMorphology\": 73, \"feOffset\": 74, \"fePointLight\": 75, \"feSpecularLighting\": 76, \"feSpotLight\": 77, \"feTile\": 78, \"feTurbulence\": 79, \"fieldset\": 80, \"figcaption\": 81, \"figure\": 82, \"filter\": 83, \"font-face-format\": 84, \"font-face-name\": 85, \"font-face-src\": 86, \"font-face-uri\": 87, \"font-face\": 88, \"font\": 89, \"footer\": 90, \"foreignObject\": 91, \"form\": 92, \"frame\": 93, \"frameset\": 94, \"g\": 95, \"glyph\": 96, \"glyphRef\": 97, \"h1\": 98, \"h2\": 99, \"h3\": 100, \"h4\": 101, \"h5\": 102, \"h6\": 103, \"head\": 104, \"header\": 105, \"hgroup\": 106, \"hkern\": 107, \"hr\": 108, \"html\": 109, \"i\": 110, \"iframe\": 111, \"image\": 112, \"img\": 113, \"input\": 114, \"ins\": 115, \"kbd\": 116, \"keygen\": 117, \"label\": 118, \"legend\": 119, \"li\": 120, \"line\": 121, \"linearGradient\": 122, \"link\": 123, \"main\": 124, \"map\": 125, \"mark\": 126, \"marker\": 127, \"marquee\": 128, \"mask\": 129, \"math\": 130, \"menu\": 131, \"menuitem\": 132, \"meta\": 133, \"metadata\": 134, \"meter\": 
135, \"missing-glyph\": 136, \"mpath\": 137, \"nav\": 138, \"nobr\": 139, \"noembed\": 140, \"noframes\": 141, \"noscript\": 142, \"object\": 143, \"ol\": 144, \"optgroup\": 145, \"option\": 146, \"output\": 147, \"p\": 148, \"param\": 149, \"path\": 150, \"pattern\": 151, \"picture\": 152, \"plaintext\": 153, \"polygon\": 154, \"polyline\": 155, \"portal\": 156, \"pre\": 157, \"progress\": 158, \"q\": 159, \"radialGradient\": 160, \"rb\": 161, \"rect\": 162, \"rp\": 163, \"rt\": 164, \"rtc\": 165, \"ruby\": 166, \"s\": 167, \"samp\": 168, \"script\": 169, \"section\": 170, \"select\": 171, \"set\": 172, \"shadow\": 173, \"slot\": 174, \"small\": 175, \"source\": 176, \"spacer\": 177, \"span\": 178, \"stop\": 179, \"strike\": 180, \"strong\": 181, \"style\": 182, \"sub\": 183, \"summary\": 184, \"sup\": 185, \"svg\": 186, \"switch\": 187, \"symbol\": 188, \"table\": 189, \"tbody\": 190, \"td\": 191, \"template\": 192, \"text\": 193, \"textPath\": 194, \"textarea\": 195, \"tfoot\": 196, \"th\": 197, \"thead\": 198, \"time\": 199, \"title\": 200, \"tr\": 201, \"track\": 202, \"tref\": 203, \"tspan\": 204, \"tt\": 205, \"u\": 206, \"ul\": 207, \"use\": 208, \"var\": 209, \"video\": 210, \"view\": 211, \"vkern\": 212, \"wbr\": 213, \"xmp\": 214},\n add_prefix_space=True,)\n \n\n\nGo to URL for sample script."
] | [
"TAGS\n#transformers #pytorch #markuplm #question-answering #arxiv-2110.08518 #endpoints_compatible #has_space #region-us \n",
"# MarkupLM Large fine-tuned on WebSRC to allow Question Answering.\n\nThis model is adapted from Microsoft's MarkupLM. This fine-tuned model is the result of partially following instructions in the MarkupLM git repo (with adjustments described farther below under the Fine-tuning args section.) This version not endorsed by Microsoft.\n\nTest the question answering out in the Markup QA space here\n\n\\---------------------------------------------------------------------------------\n\n\nFine-tuned Multimodal (text +markup language) pre-training for Document AI",
"## Introduction (From Microsoft MarkupLM Large Model Card)\n\nMarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM archives the SOTA results on multiple datasets. For more details, please refer to our paper:\n\nMarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding Junlong Li, Yiheng Xu, Lei Cui, Furu Wei\n\n\n\n\n\\---------------------------------------------------------------------------------\n\nFine-tuning args:\n --per_gpu_train_batch_size 4 --warmup_ratio 0.1 --num_train_epochs 4",
"## Training was performed on only a small subset of the WebSRC:\n \\\nThe number of total websites is 60\n\n\nThe train websites list is ['ga09']\n\n\nThe test websites list is []\n\n\nThe dev websites list is ['ga12', 'ph04', 'au08', 'ga10', 'au01', 'bo17', 'mo02', 'jo11', 'sp09', 'sp10', 'ph03', 'ph01', 'un09', 'sp14', 'jo03', 'sp07', 'un07', 'bo07', 'mo04', 'bo09', 'jo10', 'un12', 're02', 'bo01', 'ca01', 'sp15', 'au12', 'un03', 're03', 'jo13', 'ph02', 'un10', 'au09', 'au10', 'un02', 'mo07', 'sp13', 'bo08', 'sp03', 're05', 'sp06', 'ca02', 'sp02', 'sp01', 'au03', 'sp11', 'mo06', 'bo10', 'un11', 'un06', 'ga01', 'un04', 'ph05', 'au11', 'sp12', 'jo05', 'sp04', 'jo12', 'sp08']\n\n\nThe number of processed websites is 60\n\n\n\n\\---------------------------------------------------------------------------------\n\n\nInference test here may not work. Use the transformers markuplm branch from NielsRogge transformers markuplm branch\n\n\nAfter installing from there, try the following model and tokenizer assignemnts (consider using a file for the tags dict)\n\n model = MarkupLMForQuestionAnswering.from_pretrained(\"FuriouslyAsleep/markuplm-large-finetuned-qa\")\n\n tokenizer = MarkupLMTokenizer(\n vocab_file=\"URL\",\n merges_file=\"URL\",\n tags_dict= {\"a\": 0, \"abbr\": 1, \"acronym\": 2, \"address\": 3, \"altGlyph\": 4, \"altGlyphDef\": 5, \"altGlyphItem\": 6, \"animate\": 7, \"animateColor\": 8, \"animateMotion\": 9, \"animateTransform\": 10, \"applet\": 11, \"area\": 12, \"article\": 13, \"aside\": 14, \"audio\": 15, \"b\": 16, \"base\": 17, \"basefont\": 18, \"bdi\": 19, \"bdo\": 20, \"bgsound\": 21, \"big\": 22, \"blink\": 23, \"blockquote\": 24, \"body\": 25, \"br\": 26, \"button\": 27, \"canvas\": 28, \"caption\": 29, \"center\": 30, \"circle\": 31, \"cite\": 32, \"clipPath\": 33, \"code\": 34, \"col\": 35, \"colgroup\": 36, \"color-profile\": 37, \"content\": 38, \"cursor\": 39, \"data\": 40, \"datalist\": 41, \"dd\": 42, \"defs\": 43, \"del\": 44, \"desc\": 45, \"details\": 46, \"dfn\": 47, \"dialog\": 48, \"dir\": 49, \"div\": 50, \"dl\": 51, \"dt\": 52, \"ellipse\": 53, \"em\": 54, \"embed\": 55, \"feBlend\": 56, \"feColorMatrix\": 57, \"feComponentTransfer\": 58, \"feComposite\": 59, \"feConvolveMatrix\": 60, \"feDiffuseLighting\": 61, \"feDisplacementMap\": 62, \"feDistantLight\": 63, \"feFlood\": 64, \"feFuncA\": 65, \"feFuncB\": 66, \"feFuncG\": 67, \"feFuncR\": 68, \"feGaussianBlur\": 69, \"feImage\": 70, \"feMerge\": 71, \"feMergeNode\": 72, \"feMorphology\": 73, \"feOffset\": 74, \"fePointLight\": 75, \"feSpecularLighting\": 76, \"feSpotLight\": 77, \"feTile\": 78, \"feTurbulence\": 79, \"fieldset\": 80, \"figcaption\": 81, \"figure\": 82, \"filter\": 83, \"font-face-format\": 84, \"font-face-name\": 85, \"font-face-src\": 86, \"font-face-uri\": 87, \"font-face\": 88, \"font\": 89, \"footer\": 90, \"foreignObject\": 91, \"form\": 92, \"frame\": 93, \"frameset\": 94, \"g\": 95, \"glyph\": 96, \"glyphRef\": 97, \"h1\": 98, \"h2\": 99, \"h3\": 100, \"h4\": 101, \"h5\": 102, \"h6\": 103, \"head\": 104, \"header\": 105, \"hgroup\": 106, \"hkern\": 107, \"hr\": 108, \"html\": 109, \"i\": 110, \"iframe\": 111, \"image\": 112, \"img\": 113, \"input\": 114, \"ins\": 115, \"kbd\": 116, \"keygen\": 117, \"label\": 118, \"legend\": 119, \"li\": 120, \"line\": 121, \"linearGradient\": 122, \"link\": 123, \"main\": 124, \"map\": 125, \"mark\": 126, \"marker\": 127, \"marquee\": 128, \"mask\": 129, \"math\": 130, \"menu\": 131, \"menuitem\": 132, \"meta\": 133, \"metadata\": 134, \"meter\": 
135, \"missing-glyph\": 136, \"mpath\": 137, \"nav\": 138, \"nobr\": 139, \"noembed\": 140, \"noframes\": 141, \"noscript\": 142, \"object\": 143, \"ol\": 144, \"optgroup\": 145, \"option\": 146, \"output\": 147, \"p\": 148, \"param\": 149, \"path\": 150, \"pattern\": 151, \"picture\": 152, \"plaintext\": 153, \"polygon\": 154, \"polyline\": 155, \"portal\": 156, \"pre\": 157, \"progress\": 158, \"q\": 159, \"radialGradient\": 160, \"rb\": 161, \"rect\": 162, \"rp\": 163, \"rt\": 164, \"rtc\": 165, \"ruby\": 166, \"s\": 167, \"samp\": 168, \"script\": 169, \"section\": 170, \"select\": 171, \"set\": 172, \"shadow\": 173, \"slot\": 174, \"small\": 175, \"source\": 176, \"spacer\": 177, \"span\": 178, \"stop\": 179, \"strike\": 180, \"strong\": 181, \"style\": 182, \"sub\": 183, \"summary\": 184, \"sup\": 185, \"svg\": 186, \"switch\": 187, \"symbol\": 188, \"table\": 189, \"tbody\": 190, \"td\": 191, \"template\": 192, \"text\": 193, \"textPath\": 194, \"textarea\": 195, \"tfoot\": 196, \"th\": 197, \"thead\": 198, \"time\": 199, \"title\": 200, \"tr\": 201, \"track\": 202, \"tref\": 203, \"tspan\": 204, \"tt\": 205, \"u\": 206, \"ul\": 207, \"use\": 208, \"var\": 209, \"video\": 210, \"view\": 211, \"vkern\": 212, \"wbr\": 213, \"xmp\": 214},\n add_prefix_space=True,)\n \n\n\nGo to URL for sample script."
] | [
40,
187,
237,
2115
] | [
"TAGS\n#transformers #pytorch #markuplm #question-answering #arxiv-2110.08518 #endpoints_compatible #has_space #region-us \n# MarkupLM Large fine-tuned on WebSRC to allow Question Answering.\n\nThis model is adapted from Microsoft's MarkupLM. This fine-tuned model is the result of partially following instructions in the MarkupLM git repo (with adjustments described farther below under the Fine-tuning args section.) This version not endorsed by Microsoft.\n\nTest the question answering out in the Markup QA space here\n\n\\---------------------------------------------------------------------------------\n\n\nFine-tuned Multimodal (text +markup language) pre-training for Document AI## Introduction (From Microsoft MarkupLM Large Model Card)\n\nMarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM archives the SOTA results on multiple datasets. For more details, please refer to our paper:\n\nMarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding Junlong Li, Yiheng Xu, Lei Cui, Furu Wei\n\n\n\n\n\\---------------------------------------------------------------------------------\n\nFine-tuning args:\n --per_gpu_train_batch_size 4 --warmup_ratio 0.1 --num_train_epochs 4## Training was performed on only a small subset of the WebSRC:\n \\\nThe number of total websites is 60\n\n\nThe train websites list is ['ga09']\n\n\nThe test websites list is []\n\n\nThe dev websites list is ['ga12', 'ph04', 'au08', 'ga10', 'au01', 'bo17', 'mo02', 'jo11', 'sp09', 'sp10', 'ph03', 'ph01', 'un09', 'sp14', 'jo03', 'sp07', 'un07', 'bo07', 'mo04', 'bo09', 'jo10', 'un12', 're02', 'bo01', 'ca01', 'sp15', 'au12', 'un03', 're03', 'jo13', 'ph02', 'un10', 'au09', 'au10', 'un02', 'mo07', 'sp13', 'bo08', 'sp03', 're05', 'sp06', 'ca02', 'sp02', 'sp01', 'au03', 'sp11', 'mo06', 'bo10', 'un11', 'un06', 'ga01', 'un04', 'ph05', 'au11', 'sp12', 'jo05', 'sp04', 'jo12', 'sp08']\n\n\nThe number of processed websites is 60\n\n\n\n\\---------------------------------------------------------------------------------\n\n\nInference test here may not work. 
Use the transformers markuplm branch from NielsRogge transformers markuplm branch\n\n\nAfter installing from there, try the following model and tokenizer assignemnts (consider using a file for the tags dict)\n\n model = MarkupLMForQuestionAnswering.from_pretrained(\"FuriouslyAsleep/markuplm-large-finetuned-qa\")\n\n tokenizer = MarkupLMTokenizer(\n vocab_file=\"URL\",\n merges_file=\"URL\",\n tags_dict= {\"a\": 0, \"abbr\": 1, \"acronym\": 2, \"address\": 3, \"altGlyph\": 4, \"altGlyphDef\": 5, \"altGlyphItem\": 6, \"animate\": 7, \"animateColor\": 8, \"animateMotion\": 9, \"animateTransform\": 10, \"applet\": 11, \"area\": 12, \"article\": 13, \"aside\": 14, \"audio\": 15, \"b\": 16, \"base\": 17, \"basefont\": 18, \"bdi\": 19, \"bdo\": 20, \"bgsound\": 21, \"big\": 22, \"blink\": 23, \"blockquote\": 24, \"body\": 25, \"br\": 26, \"button\": 27, \"canvas\": 28, \"caption\": 29, \"center\": 30, \"circle\": 31, \"cite\": 32, \"clipPath\": 33, \"code\": 34, \"col\": 35, \"colgroup\": 36, \"color-profile\": 37, \"content\": 38, \"cursor\": 39, \"data\": 40, \"datalist\": 41, \"dd\": 42, \"defs\": 43, \"del\": 44, \"desc\": 45, \"details\": 46, \"dfn\": 47, \"dialog\": 48, \"dir\": 49, \"div\": 50, \"dl\": 51, \"dt\": 52, \"ellipse\": 53, \"em\": 54, \"embed\": 55, \"feBlend\": 56, \"feColorMatrix\": 57, \"feComponentTransfer\": 58, \"feComposite\": 59, \"feConvolveMatrix\": 60, \"feDiffuseLighting\": 61, \"feDisplacementMap\": 62, \"feDistantLight\": 63, \"feFlood\": 64, \"feFuncA\": 65, \"feFuncB\": 66, \"feFuncG\": 67, \"feFuncR\": 68, \"feGaussianBlur\": 69, \"feImage\": 70, \"feMerge\": 71, \"feMergeNode\": 72, \"feMorphology\": 73, \"feOffset\": 74, \"fePointLight\": 75, \"feSpecularLighting\": 76, \"feSpotLight\": 77, \"feTile\": 78, \"feTurbulence\": 79, \"fieldset\": 80, \"figcaption\": 81, \"figure\": 82, \"filter\": 83, \"font-face-format\": 84, \"font-face-name\": 85, \"font-face-src\": 86, \"font-face-uri\": 87, \"font-face\": 88, \"font\": 89, \"footer\": 90, \"foreignObject\": 91, \"form\": 92, \"frame\": 93, \"frameset\": 94, \"g\": 95, \"glyph\": 96, \"glyphRef\": 97, \"h1\": 98, \"h2\": 99, \"h3\": 100, \"h4\": 101, \"h5\": 102, \"h6\": 103, \"head\": 104, \"header\": 105, \"hgroup\": 106, \"hkern\": 107, \"hr\": 108, \"html\": 109, \"i\": 110, \"iframe\": 111, \"image\": 112, \"img\": 113, \"input\": 114, \"ins\": 115, \"kbd\": 116, \"keygen\": 117, \"label\": 118, \"legend\": 119, \"li\": 120, \"line\": 121, \"linearGradient\": 122, \"link\": 123, \"main\": 124, \"map\": 125, \"mark\": 126, \"marker\": 127, \"marquee\": 128, \"mask\": 129, \"math\": 130, \"menu\": 131, \"menuitem\": 132, \"meta\": 133, \"metadata\": 134, \"meter\": 135, \"missing-glyph\": 136, \"mpath\": 137, \"nav\": 138, \"nobr\": 139, \"noembed\": 140, \"noframes\": 141, \"noscript\": 142, \"object\": 143, \"ol\": 144, \"optgroup\": 145, \"option\": 146, \"output\": 147, \"p\": 148, \"param\": 149, \"path\": 150, \"pattern\": 151, \"picture\": 152, \"plaintext\": 153, \"polygon\": 154, \"polyline\": 155, \"portal\": 156, \"pre\": 157, \"progress\": 158, \"q\": 159, \"radialGradient\": 160, \"rb\": 161, \"rect\": 162, \"rp\": 163, \"rt\": 164, \"rtc\": 165, \"ruby\": 166, \"s\": 167, \"samp\": 168, \"script\": 169, \"section\": 170, \"select\": 171, \"set\": 172, \"shadow\": 173, \"slot\": 174, \"small\": 175, \"source\": 176, \"spacer\": 177, \"span\": 178, \"stop\": 179, \"strike\": 180, \"strong\": 181, \"style\": 182, \"sub\": 183, \"summary\": 184, \"sup\": 185, \"svg\": 186, \"switch\": 187, 
\"symbol\": 188, \"table\": 189, \"tbody\": 190, \"td\": 191, \"template\": 192, \"text\": 193, \"textPath\": 194, \"textarea\": 195, \"tfoot\": 196, \"th\": 197, \"thead\": 198, \"time\": 199, \"title\": 200, \"tr\": 201, \"track\": 202, \"tref\": 203, \"tspan\": 204, \"tt\": 205, \"u\": 206, \"ul\": 207, \"use\": 208, \"var\": 209, \"video\": 210, \"view\": 211, \"vkern\": 212, \"wbr\": 213, \"xmp\": 214},\n add_prefix_space=True,)\n \n\n\nGo to URL for sample script."
] |
fill-mask | transformers | https://github.com/GKLMIP/Pretrained-Models-For-Khmer
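A hypothetical loading sketch is shown below; it assumes the hub repository ships a compatible tokenizer, whereas the authors point to the GitHub repo above for the actual Khmer tokenization, so treat this as an illustration rather than official usage.

```python
# Hypothetical sketch: generic Auto-class loading of the fill-mask checkpoint.
# Assumption: a tokenizer is available on the hub; see the linked repo otherwise.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "GKLMIP/bert-khmer-base-uncased-tokenized"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
```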
If you use our model, please consider citing our paper:
```
@article{,
author="Jiang, Shengyi
and Fu, Sihui
and Lin, Nankai
and Fu, Yingwen",
title="Pre-trained Models and Evaluation Data for the Khmer Language",
year="2021",
publisher="Tsinghua Science and Technology",
}
``` | {} | GKLMIP/bert-khmer-base-uncased-tokenized | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | https://github.com/GKLMIP/Pretrained-Models-For-Khmer
If you use our model, please consider citing our paper:
```
@article{,
author="Jiang, Shengyi
and Fu, Sihui
and Lin, Nankai
and Fu, Yingwen",
title="Pre-trained Models and Evaluation Data for the Khmer Language",
year="2021",
publisher="Tsinghua Science and Technology",
}
``` | {} | GKLMIP/bert-khmer-base-uncased | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | https://github.com/GKLMIP/Pretrained-Models-For-Khmer
If you use our model, please consider citing our paper:
```
@article{,
author="Jiang, Shengyi
and Fu, Sihui
and Lin, Nankai
and Fu, Yingwen",
title="Pre-trained Models and Evaluation Data for the Khmer Language",
year="2021",
publisher="Tsinghua Science and Technology",
}
``` | {} | GKLMIP/bert-khmer-small-uncased-tokenized | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | https://github.com/GKLMIP/Pretrained-Models-For-Khmer
If you use our model, please consider citing our paper:
```
@article{,
author="Jiang, Shengyi
and Fu, Sihui
and Lin, Nankai
and Fu, Yingwen",
title="Pre-trained Models and Evaluation Data for the Khmer Language",
year="2021",
publisher="Tsinghua Science and Technology",
}
``` | {} | GKLMIP/bert-khmer-small-uncased | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | The Usage of tokenizer for Lao is in https://github.com/GKLMIP/Pretrained-Models-For-Laos. | {} | GKLMIP/bert-laos-base-uncased | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| The Usage of tokenizer for Lao is in URL | [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | The Usage of tokenizer for Lao is in https://github.com/GKLMIP/Pretrained-Models-For-Laos.
| {} | GKLMIP/bert-laos-small-uncased | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| The Usage of tokenizer for Lao is in URL
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | The Usage of tokenizer for Myanmar is same as Laos in https://github.com/GKLMIP/Pretrained-Models-For-Laos.
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Huang, Xiuwen
and Cai, Xiaonan
and Lin, Nankai",
title="Pre-trained Models and Evaluation Data for the Myanmar Language",
booktitle="The 28th International Conference on Neural Information Processing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` | {} | GKLMIP/bert-myanmar-base-uncased | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| The Usage of tokenizer for Myanmar is same as Laos in URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | The Usage of tokenizer for Myanmar is same as Laos in https://github.com/GKLMIP/Pretrained-Models-For-Laos.
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Huang, Xiuwen
and Cai, Xiaonan
and Lin, Nankai",
title="Pre-trained Models and Evaluation Data for the Myanmar Language",
booktitle="The 28th International Conference on Neural Information Processing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` | {} | GKLMIP/bert-myanmar-small-uncased | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| The Usage of tokenizer for Myanmar is same as Laos in URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | https://github.com/GKLMIP/Pretrained-Models-For-Tagalog
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Fu, Yingwen
and Lin, Xiaotian
and Lin, Nankai",
title="Pre-trained Language models for Tagalog with Multi-source data",
booktitle="Natural Language Processing and Chinese Computing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` | {} | GKLMIP/bert-tagalog-base-uncased | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | https://github.com/GKLMIP/Pretrained-Models-For-Khmer
If you use our model, please consider citing our paper:
```
@article{,
author="Jiang, Shengyi
and Fu, Sihui
and Lin, Nankai
and Fu, Yingwen",
title="Pre-trained Models and Evaluation Data for the Khmer Language",
year="2021",
publisher="Tsinghua Science and Technology",
}
``` | {} | GKLMIP/electra-khmer-base-uncased-tokenized | null | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
29
] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | https://github.com/GKLMIP/Pretrained-Models-For-Khmer
If you use our model, please consider citing our paper:
```
@article{,
author="Jiang, Shengyi
and Fu, Sihui
and Lin, Nankai
and Fu, Yingwen",
title="Pre-trained Models and Evaluation Data for the Khmer Language",
year="2021",
publisher="Tsinghua Science and Technology",
}
``` | {} | GKLMIP/electra-khmer-base-uncased | null | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
29
] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | https://github.com/GKLMIP/Pretrained-Models-For-Khmer
If you use our model, please consider citing our paper:
```
@article{,
author="Jiang, Shengyi
and Fu, Sihui
and Lin, Nankai
and Fu, Yingwen",
title="Pre-trained Models and Evaluation Data for the Khmer Language",
year="2021",
publisher="Tsinghua Science and Technology",
}
``` | {} | GKLMIP/electra-khmer-small-uncased | null | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
29
] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | transformers | The Usage of tokenizer for Lao is in https://github.com/GKLMIP/Pretrained-Models-For-Laos.
| {} | GKLMIP/electra-laos-base-uncased | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us
| The Usage of tokenizer for Lao is in URL
| [] | [
"TAGS\n#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us \n"
] | [
24
] | [
"TAGS\n#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us \n"
] |
null | transformers | The Usage of tokenizer for Lao is in https://github.com/GKLMIP/Pretrained-Models-For-Laos. | {} | GKLMIP/electra-laos-small-uncased | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us
| The Usage of tokenizer for Lao is in URL | [] | [
"TAGS\n#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us \n"
] | [
24
] | [
"TAGS\n#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | The Usage of tokenizer for Myanmar is same as Laos in https://github.com/GKLMIP/Pretrained-Models-For-Laos.
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Huang, Xiuwen
and Cai, Xiaonan
and Lin, Nankai",
title="Pre-trained Models and Evaluation Data for the Myanmar Language",
booktitle="The 28th International Conference on Neural Information Processing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` | {} | GKLMIP/electra-myanmar-base-uncased | null | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| The Usage of tokenizer for Myanmar is same as Laos in URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
29
] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | transformers | The Usage of tokenizer for Myanmar is same as Laos in https://github.com/GKLMIP/Pretrained-Models-For-Laos.
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Huang, Xiuwen
and Cai, Xiaonan
and Lin, Nankai",
title="Pre-trained Models and Evaluation Data for the Myanmar Language",
booktitle="The 28th International Conference on Neural Information Processing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` | {} | GKLMIP/electra-myanmar-small-uncased | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us
| The usage of the tokenizer for Myanmar is the same as for Lao; see URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us \n"
] | [
24
] | [
"TAGS\n#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | https://github.com/GKLMIP/Pretrained-Models-For-Tagalog
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Fu, Yingwen
and Lin, Xiaotian
and Lin, Nankai",
title="Pre-trained Language models for Tagalog with Multi-source data",
booktitle="Natural Language Processing and Chinese Computing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` | {} | GKLMIP/electra-tagalog-base-uncased | null | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
29
] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Huang, Xixuan
and Lin, Nankai
and Li, Kexin
and Wang, Lianxi
and Gan, SuiFu",
title="HinPLMs: Pre-trained Language Models for Hindi",
booktitle="The International Conference on Asian Language Processing",
year="2021",
publisher="IEEE Xplore"
}
``` | {} | GKLMIP/roberta-hindi-romanized | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Huang, Xixuan
and Lin, Nankai
and Li, Kexin
and Wang, Lianxi
and Gan, SuiFu",
title="HinPLMs: Pre-trained Language Models for Hindi",
booktitle="The International Conference on Asian Language Processing",
year="2021",
publisher="IEEE Xplore"
}
``` | {} | GKLMIP/roberta-hindi-devanagari | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | https://github.com/GKLMIP/Pretrained-Models-For-Tagalog
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Fu, Yingwen
and Lin, Xiaotian
and Lin, Nankai",
title="Pre-trained Language models for Tagalog with Multi-source data",
booktitle="Natural Language Processing and Chinese Computing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` | {} | GKLMIP/roberta-tagalog-base | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| URL
If you use our model, please consider citing our paper:
| [] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | null | Naming pattern:
1. `GPL/${dataset}-msmarco-distilbert-gpl`: Model with training order of (1) MarginMSE on MSMARCO -> (2) GPL on ${dataset};
2. `GPL/${dataset}-tsdae-msmarco-distilbert-gpl`: Model with training order of (1) TSDAE on ${dataset} -> (2) MarginMSE on MSMARCO -> (3) GPL on ${dataset};
3. `GPL/msmarco-distilbert-margin-mse`: Model trained on MSMARCO with MarginMSE;
4. `GPL/${dataset}-tsdae-msmarco-distilbert-margin-mse`: Model with training order of (1) TSDAE on ${dataset} -> (2) MarginMSE on MSMARCO;
5. `GPL/${dataset}-distilbert-tas-b-gpl-self_miner`: Starting from the [tas-b model](https://huggingface.co/sentence-transformers/msmarco-distilbert-base-tas-b), the models were trained with GPL on the target corpus ${dataset} with the base model itself as the negative miner (here noted as "self_miner").
Actually, models in 1. and 2. are built on top of 3. and 4., respectively.
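As a minimal illustration, the sketch below loads one checkpoint from pattern 1 (with `${dataset}` = `fiqa`, i.e. `GPL/fiqa-msmarco-distilbert-gpl`) as an ordinary Sentence-Transformers bi-encoder and ranks two toy passages against a query. The query and passages are made-up examples; models from the other patterns load the same way, only the checkpoint name changes.
```python
from sentence_transformers import SentenceTransformer, util

# Pattern 1 checkpoint: MarginMSE on MSMARCO, then GPL on the FiQA corpus
model = SentenceTransformer("GPL/fiqa-msmarco-distilbert-gpl")

query = "What are the tax implications of selling stocks?"
passages = [
    "Capital gains from selling shares are generally taxable in the year of the sale.",
    "The museum is closed on public holidays.",
]

# Encode into 768-dimensional dense vectors and rank by cosine similarity
query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)
scores = util.cos_sim(query_emb, passage_embs)[0]

for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {passage}")
```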
| {} | GPL/README | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| Naming pattern:
1. 'GPL/${dataset}-msmarco-distilbert-gpl': Model with training order of (1) MarginMSE on MSMARCO -> (2) GPL on ${dataset};
2. 'GPL/${dataset}-tsdae-msmarco-distilbert-gpl': Model with training order of (1) TSDAE on ${dataset} -> (2) MarginMSE on MSMARCO -> (3) GPL on ${dataset};
3. 'GPL/msmarco-distilbert-margin-mse': Model trained on MSMARCO with MarginMSE;
4. 'GPL/${dataset}-tsdae-msmarco-distilbert-margin-mse': Model with training order of (1) TSDAE on ${dataset} -> (2) MarginMSE on MSMARCO;
5. 'GPL/${dataset}-distilbert-tas-b-gpl-self_miner': Starting from the tas-b model, the models were trained with GPL on the target corpus ${dataset} with the base model itself as the negative miner (here noted as "self_miner").
Actually, models in 1. and 2. are built on top of 3. and 4., respectively.
| [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | GPL/bioasq-1m-msmarco-distilbert-gpl | null | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 140000 with parameters:
Loss:
'URL.MarginDistillationLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
33,
41,
30,
58,
26,
56,
5,
5
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:## Full Model Architecture## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | GPL/bioasq-1m-tsdae-msmarco-distilbert-gpl | null | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 140000 with parameters:
Loss:
'URL.MarginDistillationLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
33,
41,
30,
58,
26,
56,
5,
5
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:## Full Model Architecture## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | GPL/cqadupstack-msmarco-distilbert-gpl | null | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 140000 with parameters:
Loss:
'URL.MarginDistillationLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
33,
41,
30,
58,
26,
56,
5,
5
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:## Full Model Architecture## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | GPL/cqadupstack-tsdae-msmarco-distilbert-gpl | null | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 140000 with parameters:
Loss:
'URL.MarginDistillationLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
33,
41,
30,
58,
26,
56,
5,
5
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:## Full Model Architecture## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | GPL/fiqa-msmarco-distilbert-gpl | null | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 140000 with parameters:
Loss:
'URL.MarginDistillationLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
33,
41,
30,
58,
26,
56,
5,
5
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:## Full Model Architecture## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | GPL/fiqa-tsdae-msmarco-distilbert-gpl | null | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 140000 with parameters:
Loss:
'URL.MarginDistillationLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
33,
41,
30,
58,
26,
56,
5,
5
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:## Full Model Architecture## Citing & Authors"
] |
feature-extraction | transformers | This is the zero-shot baseline model in the paper ["GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval"](https://arxiv.org/abs/2112.07577)
The training setup:
1. Start from `distilbert-base-uncased`;
2. Mine 50 hard negatives for each query on MS MARCO with `sentence-transformers/msmarco-distilbert-base-v3` and `sentence-transformers/msmarco-MiniLM-L-6-v3`;
3. Do Margin-MSE training on the resulting tuples (query, gold relevant passage, mined hard negative) with the teacher model `cross-encoder/ms-marco-MiniLM-L-6-v2` for 70K steps, using batch size 75 and a maximum sequence length of 350 (see the sketch below).
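
For illustration only, here is a minimal sketch of this Margin-MSE distillation step with the sentence-transformers API; the example triple, its margin label, and the warmup setting are assumptions, not the exact training code used for this checkpoint.

```python
# Minimal sketch of the Margin-MSE training step described above (not the original script).
# The single example triple and its teacher margin are made up for illustration;
# the real run used ~50 mined hard negatives per MS MARCO query.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, models

word_embedding_model = models.Transformer("distilbert-base-uncased", max_seq_length=350)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
student = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# label = CrossEncoder(query, positive) - CrossEncoder(query, hard_negative),
# pre-computed with cross-encoder/ms-marco-MiniLM-L-6-v2.
train_examples = [
    InputExample(
        texts=["what is margin-mse", "Margin-MSE distills a cross-encoder ...", "An unrelated passage ..."],
        label=7.2,
    ),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=75)
train_loss = losses.MarginMSELoss(student)

student.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=1000,  # warmup value is an assumption
)
```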
| {} | GPL/msmarco-distilbert-margin-mse | null | [
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"arxiv:2112.07577",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2112.07577"
] | [] | TAGS
#transformers #pytorch #distilbert #feature-extraction #arxiv-2112.07577 #endpoints_compatible #region-us
| This is the zero-shot baseline model in the paper "GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval"
The training setup:
1. Start from 'distilbert-base-uncased';
2. Mine 50 hard negatives for each query on MS MARCO with 'sentence-transformers/msmarco-distilbert-base-v3' and 'sentence-transformers/msmarco-MiniLM-L-6-v3';
3. Do Margin-MSE training on the tuples (including queries, gold relevant, and hard negatives) with the teacher model 'cross-encoder/ms-marco-MiniLM-L-6-v2' for 70K steps with batch size 75, max. sequence-length 350.
| [] | [
"TAGS\n#transformers #pytorch #distilbert #feature-extraction #arxiv-2112.07577 #endpoints_compatible #region-us \n"
] | [
36
] | [
"TAGS\n#transformers #pytorch #distilbert #feature-extraction #arxiv-2112.07577 #endpoints_compatible #region-us \n"
] |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | GPL/robust04-msmarco-distilbert-gpl | null | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 140000 with parameters:
Loss:
'URL.MarginDistillationLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
33,
41,
30,
58,
26,
56,
5,
5
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:## Full Model Architecture## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | GPL/robust04-tsdae-msmarco-distilbert-gpl | null | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 140000 with parameters:
Loss:
'URL.MarginDistillationLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
33,
41,
30,
58,
26,
56,
5,
5
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:## Full Model Architecture## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | GPL/scifact-msmarco-distilbert-gpl | null | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 140000 with parameters:
Loss:
'URL.MarginDistillationLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
33,
41,
30,
58,
26,
56,
5,
5
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:## Full Model Architecture## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | GPL/trec-covid-v2-msmarco-distilbert-gpl | null | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #has_space #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 140000 with parameters:
Loss:
'URL.MarginDistillationLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #has_space #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
37,
41,
30,
58,
26,
56,
5,
5
] | [
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #has_space #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 140000 with parameters:\n\n\nLoss:\n\n'URL.MarginDistillationLoss' \n\nParameters of the fit()-Method:## Full Model Architecture## Citing & Authors"
] |
text-generation | transformers | # Pinkie Pie Chatbot
Based on work by r3dhummingbird. | {"license": "mit", "tags": ["conversational"]} | GabbyDaBUNBUN/DialoGPT-medium-PinkiePie | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Pinkie Pie Chatbot
used from r3dhummingbird! | [
"# Pinkie Pie Chatbot\nused from r3dhummingbird!"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Pinkie Pie Chatbot\nused from r3dhummingbird!"
] | [
43,
14
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Pinkie Pie Chatbot\nused from r3dhummingbird!"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Galaxy/DialoGPT-small-hermoine | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model"
] |
text-generation | transformers |
# Indonesian GPT-2 finetuned on Indonesian academic journals
This is the [Indonesian gpt2-small model](https://huggingface.co/flax-community/gpt2-small-indonesian) fine-tuned on abstracts of Indonesian academic journals. All training was done on a TPUv2-8 VM sponsored by [TPU Research Cloud](https://sites.research.google/trc/).
The demo can be found [here](https://huggingface.co/spaces/flax-community/gpt2-indonesian).
## How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness,
we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='Galuh/id-journal-gpt2')
>>> set_seed(42)
>>> generator("Penelitian ini menggunakan teknik DNA barcoding untuk", max_length=30, num_return_sequences=5)
[{'generated_text': 'Penelitian ini menggunakan teknik DNA barcoding untuk mendeteksi perubahan genetik bakteri pada udang windu. Empat tahap telah dilakukan, meliputi preparasi media untuk larva,'},
{'generated_text': 'Penelitian ini menggunakan teknik DNA barcoding untuk identifikasi gen pengasil flavonoid. Data yang diperoleh dari hasil PCR diidentifikasi dengan teknik sekuensing'},
{'generated_text': 'Penelitian ini menggunakan teknik DNA barcoding untuk mengekstraksi fragmen DNA dari sampel kulit buaya dan tulang anjing, di mana proses ini melibatkan karakterisasi enzim yang'},
{'generated_text': 'Penelitian ini menggunakan teknik DNA barcoding untuk melakukan transformasi. Tahapan transformasi meliputi seleksi sel dengan urutan (2, 8, 16,..., 18) dan'},
{'generated_text': 'Penelitian ini menggunakan teknik DNA barcoding untuk amplifikasi genom DNA dengan menggunakan primer TG8226 dan TG806. Metode pol'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('Galuh/id-journal-gpt2')
model = GPT2Model.from_pretrained('Galuh/id-journal-gpt2')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('Galuh/id-journal-gpt2')
model = TFGPT2Model.from_pretrained('Galuh/id-journal-gpt2')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Limitations and bias
This model is derived from the [Indonesian gpt2-small model](https://huggingface.co/flax-community/gpt2-small-indonesian), so it is also subject to the same [limitations and bias as the original model](https://huggingface.co/flax-community/gpt2-small-indonesian#limitations-and-bias). A more detailed bias analysis for this specific model is coming soon.
## Training data
The model was trained on a dataset of Indonesian journals. We only trained this model on the abstracts. We extract each abstract with a script that finds the text located between the words "Abstrak" (abstract) and "Kata kunci" (keywords). The extraction script can be found [here](https://github.com/galuhsahid/id-journal-gpt2/). To separate the abstracts, we also add an end-of-text token (`<|endoftext|>`) between each abstract.
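
As a rough, hypothetical illustration of that extraction step (the actual script lives in the linked repository and may differ in its details):

```python
# Illustrative sketch of the abstract-extraction step; not the original script.
import re

documents = [
    "Judul ... Abstrak Penelitian ini membahas teknik DNA barcoding ... Kata kunci: DNA, barcoding",
]

def extract_abstract(article_text):
    # Take the text between "Abstrak" and "Kata kunci" (case-insensitive, across newlines).
    match = re.search(r"abstrak(.*?)kata\s+kunci", article_text, flags=re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else None

abstracts = [a for a in (extract_abstract(doc) for doc in documents) if a]
corpus = "<|endoftext|>".join(abstracts)  # end-of-text token separates the abstracts
print(corpus)
```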
The information of the sub-dataset and the distribution of the training and evaluation dataset are as follows:
| split | count | percentage |
| ---------- | ---------- | -------------- |
| train | 146,248 | 90% |
| validation | 16,250 | 10% |
## Training procedure
The model was trained on a TPUv2-8 VM provided by [TPU Research Cloud](https://sites.research.google/trc/). The training duration was `2h 30m 57s`.
### Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| dataset | train loss | eval loss | eval perplexity |
| ---------- | ---------- | -------------- | ---------- |
| Indonesian journals dataset (abstract only) | 2.913 | 2.855 | 17.37 |
### Tracking
The training process was tracked in [TensorBoard](https://huggingface.co/Galuh/id-journal-gpt2/tensorboard). | {"language": "id", "widget": [{"text": "Penelitian ini bertujuan untuk menentukan identitas invertebrata laut dari Perairan Papua dengan teknik DNA barcoding"}]} | Galuh/id-journal-gpt2 | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"id",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"id"
] | TAGS
#transformers #pytorch #jax #tensorboard #gpt2 #text-generation #id #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| Indonesian GPT-2 finetuned on Indonesian academic journals
==========================================================
This is the Indonesian gpt2-small model fine-tuned to abstracts of Indonesian academic journals. All training was done on a TPUv2-8 VM sponsored by TPU Research Cloud.
The demo can be found here.
How to use
----------
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness,
we set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Limitations and bias
--------------------
This model is originally the Indonesian gpt2-small model, thus this model is also subject to the same limitations and bias as the original model. More detailed bias and analysis on this specific model is coming soon.
Training data
-------------
The model was trained on a dataset of Indonesian journals. We only trained this model on the abstracts. We extract the abstract by writing a script to find any text that is located between the word "Abstrak" (abstract) and "Kata kunci" (keywords). The extraction script can be found here. To separate each abstract, we also add an end of text token ('<|endoftext|>') between each abstract.
The information of the sub-dataset and the distribution of the training and evaluation dataset are as follows:
split: train, count: 146,248, percentage: 90%
split: validation, count: 16,250, percentage: 10%
Training procedure
------------------
The model was trained on a TPUv2-8 VM provided by TPU Research Cloud. The training duration was '2h 30m 57s'.
### Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
### Tracking
The training process was tracked in TensorBoard.
| [
"### Evaluation results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):",
"### Tracking\n\n\nThe training process was tracked in TensorBoard."
] | [
"TAGS\n#transformers #pytorch #jax #tensorboard #gpt2 #text-generation #id #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"### Evaluation results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):",
"### Tracking\n\n\nThe training process was tracked in TensorBoard."
] | [
47,
23,
13
] | [
"TAGS\n#transformers #pytorch #jax #tensorboard #gpt2 #text-generation #id #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n### Evaluation results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):### Tracking\n\n\nThe training process was tracked in TensorBoard."
] |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-Indonesian
This is the model for Wav2Vec2-Large-XLSR-Indonesian, a
[facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model fine-tuned on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Galuh/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("Galuh/wav2vec2-large-xlsr-indonesian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Galuh/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("Galuh/wav2vec2-large-xlsr-indonesian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 18.32 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/galuhsahid/wav2vec2-indonesian)
(will be available soon) | {"language": "id", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Indonesian by Galuh", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice id", "type": "common_voice", "args": "id"}, "metrics": [{"type": "wer", "value": 21.07, "name": "Test WER"}]}]}]} | Galuh/wav2vec2-large-xlsr-indonesian | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"id",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"id"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Indonesian
This is the model for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned
facebook/wav2vec2-large-xlsr-53
model on the Indonesian Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
Test Result: 18.32 %
## Training
The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found here
(will be available soon) | [
"# Wav2Vec2-Large-XLSR-Indonesian\n\nThis is the model for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned \nfacebook/wav2vec2-large-xlsr-53\nmodel on the Indonesian Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 18.32 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here \n(will be available soon)"
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Indonesian\n\nThis is the model for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned \nfacebook/wav2vec2-large-xlsr-53\nmodel on the Indonesian Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 18.32 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here \n(will be available soon)"
] | [
66,
79,
18,
26,
53
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n# Wav2Vec2-Large-XLSR-Indonesian\n\nThis is the model for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned \nfacebook/wav2vec2-large-xlsr-53\nmodel on the Indonesian Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.## Usage\nThe model can be used directly (without a language model) as follows:## Evaluation\n\nThe model can be evaluated as follows on the Indonesian test data of Common Voice.\n\n\n\nTest Result: 18.32 %## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO\n\nThe script used for training can be found here \n(will be available soon)"
] |
text-generation | transformers |
# Gamer Bot DialoGPT Model | {"tags": ["conversational"]} | GamerMan02/DialoGPT-medium-gamerbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Gamer Bot DialoGPT Model | [
"# Gamer Bot DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Gamer Bot DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Gamer Bot DialoGPT Model"
] |
text-generation | transformers | This be a test | {} | GammaPTest/e_bot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| This be a test | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
36
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
feature-extraction | transformers | CODER: Knowledge-infused cross-lingual medical term embedding for term normalization.
English Version. This repository keeps its old name, but the model is not UMLSBert!
Github Link: https://github.com/GanjinZero/CODER
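
A hedged usage sketch (not part of the original card): the snippet below loads the checkpoint with the plain `transformers` API and compares two term embeddings. The [CLS]-pooling choice and the example terms are assumptions; check the GitHub repository above for the reference inference code.

```python
# Illustrative only: embed two medical terms and compare them.
# [CLS] pooling is an assumption here; see the CODER repository for the reference setup.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("GanjinZero/UMLSBert_ENG")
model = AutoModel.from_pretrained("GanjinZero/UMLSBert_ENG")
model.eval()

terms = ["heart attack", "myocardial infarction"]
inputs = tokenizer(terms, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state[:, 0]  # one [CLS] vector per term

similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"cosine similarity: {similarity.item():.3f}")
```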
```
@article{YUAN2022103983,
title = {CODER: Knowledge-infused cross-lingual medical term embedding for term normalization},
journal = {Journal of Biomedical Informatics},
pages = {103983},
year = {2022},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2021.103983},
url = {https://www.sciencedirect.com/science/article/pii/S1532046421003129},
author = {Zheng Yuan and Zhengyun Zhao and Haixia Sun and Jiao Li and Fei Wang and Sheng Yu},
keywords = {medical term normalization, cross-lingual, medical term representation, knowledge graph embedding, contrastive learning}
}
``` | {"language": ["en"], "license": "apache-2.0", "tags": ["bert", "biomedical"]} | GanjinZero/UMLSBert_ENG | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"biomedical",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #bert #feature-extraction #biomedical #en #license-apache-2.0 #endpoints_compatible #region-us
| CODER: Knowledge infused cross-lingual medical term embedding for term normalization.
English Version. Old name. This model is not UMLSBert!!!
Github Link: URL
| [] | [
"TAGS\n#transformers #pytorch #safetensors #bert #feature-extraction #biomedical #en #license-apache-2.0 #endpoints_compatible #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #feature-extraction #biomedical #en #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
feature-extraction | transformers | CODER: Knowledge-infused cross-lingual medical term embedding for term normalization.
Multilingual Version.
Github Link: https://github.com/GanjinZero/CODER
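
As a hedged sketch of cross-lingual use (not from the original card), the snippet below compares an English term with a Chinese surface form of the same concept; the [CLS]-pooling choice and the example terms are assumptions — see the repository above for the reference code.

```python
# Sketch only: cross-lingual term similarity with the multilingual CODER checkpoint.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("GanjinZero/coder_all")
model = AutoModel.from_pretrained("GanjinZero/coder_all")
model.eval()

terms = ["diabetes mellitus", "糖尿病"]  # English / Chinese names for the same concept
inputs = tokenizer(terms, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    cls_vectors = model(**inputs).last_hidden_state[:, 0]  # assumed [CLS] pooling

print(torch.nn.functional.cosine_similarity(cls_vectors[0], cls_vectors[1], dim=0).item())
```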
```
@article{YUAN2022103983,
title = {CODER: Knowledge-infused cross-lingual medical term embedding for term normalization},
journal = {Journal of Biomedical Informatics},
pages = {103983},
year = {2022},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2021.103983},
url = {https://www.sciencedirect.com/science/article/pii/S1532046421003129},
author = {Zheng Yuan and Zhengyun Zhao and Haixia Sun and Jiao Li and Fei Wang and Sheng Yu},
keywords = {medical term normalization, cross-lingual, medical term representation, knowledge graph embedding, contrastive learning}
}
``` | {"language": ["en"], "license": "apache-2.0", "tags": ["bert", "biomedical"]} | GanjinZero/coder_all | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"biomedical",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #feature-extraction #biomedical #en #license-apache-2.0 #endpoints_compatible #region-us
| CODER: Knowledge infused cross-lingual medical term embedding for term normalization.
Multi lingual Version.
Github Link: URL
| [] | [
"TAGS\n#transformers #pytorch #bert #feature-extraction #biomedical #en #license-apache-2.0 #endpoints_compatible #region-us \n"
] | [
35
] | [
"TAGS\n#transformers #pytorch #bert #feature-extraction #biomedical #en #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
feature-extraction | transformers | CODER: Knowledge infused cross-lingual medical term embedding for term normalization.
English Version.
Github Link: https://github.com/GanjinZero/CODER
```
@article{YUAN2022103983,
title = {CODER: Knowledge-infused cross-lingual medical term embedding for term normalization},
journal = {Journal of Biomedical Informatics},
pages = {103983},
year = {2022},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2021.103983},
url = {https://www.sciencedirect.com/science/article/pii/S1532046421003129},
author = {Zheng Yuan and Zhengyun Zhao and Haixia Sun and Jiao Li and Fei Wang and Sheng Yu},
keywords = {medical term normalization, cross-lingual, medical term representation, knowledge graph embedding, contrastive learning}
}
``` | {"language": ["en"], "license": "apache-2.0", "tags": ["bert", "biomedical"]} | GanjinZero/coder_eng | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"biomedical",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #bert #feature-extraction #biomedical #en #license-apache-2.0 #endpoints_compatible #region-us
| CODER: Knowledge infused cross-lingual medical term embedding for term normalization.
English Version.
Github Link: URL
| [] | [
"TAGS\n#transformers #pytorch #safetensors #bert #feature-extraction #biomedical #en #license-apache-2.0 #endpoints_compatible #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #feature-extraction #biomedical #en #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
feature-extraction | transformers | Automatic Biomedical Term Clustering by Learning Fine-grained Term Representations.
CODER++
Github Link: https://github.com/GanjinZero/CODER
```
@misc{https://doi.org/10.48550/arxiv.2204.00391,
doi = {10.48550/ARXIV.2204.00391},
url = {https://arxiv.org/abs/2204.00391},
author = {Zeng, Sihang and Yuan, Zheng and Yu, Sheng},
title = {Automatic Biomedical Term Clustering by Learning Fine-grained Term Representations},
publisher = {arXiv},
year = {2022}
}
``` | {"language": ["en"], "license": "apache-2.0", "tags": ["bert", "biomedical"]} | GanjinZero/coder_eng_pp | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"biomedical",
"en",
"arxiv:2204.00391",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2204.00391"
] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #bert #feature-extraction #biomedical #en #arxiv-2204.00391 #license-apache-2.0 #endpoints_compatible #region-us
| Automatic Biomedical Term Clustering by Learning Fine-grained Term Representations.
CODER++
Github Link: URL
| [] | [
"TAGS\n#transformers #pytorch #safetensors #bert #feature-extraction #biomedical #en #arxiv-2204.00391 #license-apache-2.0 #endpoints_compatible #region-us \n"
] | [
50
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #feature-extraction #biomedical #en #arxiv-2204.00391 #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Zhongli DialoGPT Model | {"tags": ["conversational"]} | Gappy/DialoGPT-small-Zhongli | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Zhongli DialoGPT Model | [
"# Zhongli DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Zhongli DialoGPT Model"
] | [
39,
8
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Zhongli DialoGPT Model"
] |
null | null |
# CRDNN with CTC/Attention and RNNLM trained on LibriSpeech
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on LibriSpeech (EN) within
SpeechBrain. For a better experience we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The given ASR model achieves the following performance:
| Release | hyperparams file | Test WER | Model link | GPUs |
|:-------------:|:---------------------------:| -----:| -----:| --------:|
| 20-05-22 | BPE_1000.yaml | 3.08 | Not Available | 1xV100 32GB |
| 20-05-22 | BPE_5000.yaml | 2.89 | Not Available | 1xV100 32GB |
## Pipeline description
This ASR system is composed of 3 different but linked blocks:
1. Tokenizer (unigram) that transforms words into subword units, trained on
the training transcriptions of LibriSpeech.
2. Neural language model (RNNLM) trained on the full 10M words dataset.
3. Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of
N blocks of convolutional neural networks with normalisation and pooling on the
frequency domain. Then, a bidirectional LSTM is connected to a final DNN to obtain
the final acoustic representation that is given to the CTC and attention decoders.
## Intended uses & limitations
This model has been primarily developed to be run within SpeechBrain as a pretrained ASR model
for the English language. Thanks to the flexibility of SpeechBrain, any of the 3 blocks
detailed above can be extracted and connected to your custom pipeline as long as SpeechBrain is
installed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install \\we hide ! SpeechBrain is still private :p
```
Also, for this model, you need SentencePiece. Install with
```
pip install sentencepiece
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="Gastron/asr-crdnn-librispeech")
asr_model.transcribe_file("path_to_your_file.wav")
```
### Obtaining encoded features
The SpeechBrain EncoderDecoderASR() class also provides an easy way to encode
the speech signal without running the decoding phase by calling
``EncoderDecoderASR.encode_batch()``
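For example (a sketch under the same setup as above; exact method names and signatures may vary between SpeechBrain versions):
```python
import torch
from speechbrain.pretrained import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(source="Gastron/asr-crdnn-librispeech")

# Load one file, add a batch dimension, and pass relative lengths (1.0 = full length)
signal = asr_model.load_audio("path_to_your_file.wav")
wavs = signal.unsqueeze(0)
wav_lens = torch.tensor([1.0])

encoded = asr_model.encode_batch(wavs, wav_lens)  # [batch, time, feature_dim]
```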
#### Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/speechbrain/speechbrain}},
}
```
| {"language": "en", "license": "apache-2.0", "tags": ["ASR", "CTC", "Attention", "pytorch"], "datasets": ["librispeech"], "metrics": ["wer", "cer"]} | Gastron/asr-crdnn-librispeech | null | [
"ASR",
"CTC",
"Attention",
"pytorch",
"en",
"dataset:librispeech",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#ASR #CTC #Attention #pytorch #en #dataset-librispeech #license-apache-2.0 #region-us
| CRDNN with CTC/Attention and RNNLM trained on LibriSpeech
=========================================================
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on LibriSpeech (EN) within
SpeechBrain. For a better experience we encourage you to learn more about
SpeechBrain. The given ASR model achieves the following performance:
Pipeline description
--------------------
This ASR system is composed of 3 different but linked blocks:
1. Tokenizer (unigram) that transforms words into subword units, trained on
the training transcriptions of LibriSpeech.
2. Neural language model (RNNLM) trained on the full 10M words dataset.
3. Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of
N blocks of convolutional neural networks with normalisation and pooling on the
frequency domain. Then, a bidirectional LSTM is connected to a final DNN to obtain
the final acoustic representation that is given to the CTC and attention decoders.
Intended uses & limitations
---------------------------
This model has been primarily developed to be run within SpeechBrain as a pretrained ASR model
for the English language. Thanks to the flexibility of SpeechBrain, any of the 3 blocks
detailed above can be extracted and connected to your custom pipeline as long as SpeechBrain is
installed.
Install SpeechBrain
-------------------
First of all, please install SpeechBrain with the following command:
Also, for this model, you need SentencePiece. Install with
Please note that we encourage you to read our tutorials and learn more about
SpeechBrain.
### Transcribing your own audio files
### Obtaining encoded features
The SpeechBrain EncoderDecoderASR() class also provides an easy way to encode
the speech signal without running the decoding phase by calling
''EncoderDecoderASR.encode\_batch()''
#### Referencing SpeechBrain
| [
"### Transcribing your own audio files",
"### Obtaining encoded features\n\n\nThe SpeechBrain EncoderDecoderASR() class also provides an easy way to encode\nthe speech signal without running the decoding phase by calling\n''EncoderDecoderASR.encode\\_batch()''",
"#### Referencing SpeechBrain"
] | [
"TAGS\n#ASR #CTC #Attention #pytorch #en #dataset-librispeech #license-apache-2.0 #region-us \n",
"### Transcribing your own audio files",
"### Obtaining encoded features\n\n\nThe SpeechBrain EncoderDecoderASR() class also provides an easy way to encode\nthe speech signal without running the decoding phase by calling\n''EncoderDecoderASR.encode\\_batch()''",
"#### Referencing SpeechBrain"
] | [
37,
11,
56,
8
] | [
"TAGS\n#ASR #CTC #Attention #pytorch #en #dataset-librispeech #license-apache-2.0 #region-us \n### Transcribing your own audio files### Obtaining encoded features\n\n\nThe SpeechBrain EncoderDecoderASR() class also provides an easy way to encode\nthe speech signal without running the decoding phase by calling\n''EncoderDecoderASR.encode\\_batch()''#### Referencing SpeechBrain"
] |
automatic-speech-recognition | speechbrain |
# CRDNN with Attention trained on LP
This is an initial model with a partly wrong configuration, just to show an initial example.
| {"language": "fi", "tags": ["automatic-speech-recognition", "Attention", "pytorch", "speechbrain"], "metrics": ["wer", "cer"]} | Gastron/lp-initial-aed-short | null | [
"speechbrain",
"automatic-speech-recognition",
"Attention",
"pytorch",
"fi",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fi"
] | TAGS
#speechbrain #automatic-speech-recognition #Attention #pytorch #fi #region-us
|
# CRDNN with Attention trained on LP
This is an initial model with a partly wrong configuration, just to show an initial example.
| [
"# CRDNN with Attention trained on LP\n\nThis is a an initial model, partly wrong configuration, just to show an initial example."
] | [
"TAGS\n#speechbrain #automatic-speech-recognition #Attention #pytorch #fi #region-us \n",
"# CRDNN with Attention trained on LP\n\nThis is a an initial model, partly wrong configuration, just to show an initial example."
] | [
24,
27
] | [
"TAGS\n#speechbrain #automatic-speech-recognition #Attention #pytorch #fi #region-us \n# CRDNN with Attention trained on LP\n\nThis is a an initial model, partly wrong configuration, just to show an initial example."
] |
text-generation | transformers |
# Guy DialoGPT Model | {"tags": ["conversational"]} | Geezy/DialoGPT-small-guy | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Guy DialoGPT Model | [
"# Guy DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Guy DialoGPT Model"
] | [
39,
6
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Guy DialoGPT Model"
] |
text-generation | transformers | # Harry Potter DialoGPT Model | {"tags": ["conversational"]} | GenDelport/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Harry Potter DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-dutch-cased-finetuned-gem
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8767
## Model description
More information needed
## Intended uses & limitations
More information needed
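No usage details are given in the card, but the checkpoint should load with the standard fill-mask pipeline. The sketch below is an assumption based on the model type, and the Dutch example sentence is purely illustrative.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="GeniusVoice/bert-base-dutch-cased-finetuned-gem")
# Illustrative Dutch sentence; [MASK] is the mask token for BERT-style models
print(fill_mask("Amsterdam is de [MASK] van Nederland."))
```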
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7518 | 1.0 | 2133 | 1.8428 |
| 1.5679 | 2.0 | 4266 | 1.8729 |
| 1.3332 | 3.0 | 6399 | 1.8767 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
| {"language": ["nl"], "tags": ["generated_from_trainer"], "model_index": [{"name": "bert-base-dutch-cased-finetuned-gem", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]} | GeniusVoice/bert-base-dutch-cased-finetuned-gem | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"nl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #nl #autotrain_compatible #endpoints_compatible #region-us
| bert-base-dutch-cased-finetuned-gem
===================================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.8767
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.8.2
* Pytorch 1.9.0+cu102
* Datasets 1.9.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.9.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #nl #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.9.0\n* Tokenizers 0.10.3"
] | [
43,
103,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #nl #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0### Training results### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.9.0\n* Tokenizers 0.10.3"
] |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 165 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 24,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | GeniusVoice/gv-semanticsearch-dutch-cased | null | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 165 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 165 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 165 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
31,
41,
30,
58,
26,
72,
5,
5
] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 165 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:## Full Model Architecture## Citing & Authors"
] |
null | null |
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
## Example Usage
```python
import gensim.downloader as api
model = api.load("glove-twitter-25", from_hf=True)
model.most_similar(positive=['fruit', 'flower'], topn=1)
"""
Output:
[('cherry', 0.9183273911476135)]
"""
``` | {"license": "pddl", "tags": ["glove", "gensim"]} | Gensim/glove-twitter-25 | null | [
"glove",
"gensim",
"license:pddl",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#glove #gensim #license-pddl #has_space #region-us
|
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* URL
* URL
## Example Usage
| [
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL",
"## Example Usage"
] | [
"TAGS\n#glove #gensim #license-pddl #has_space #region-us \n",
"# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL",
"## Example Usage"
] | [
20,
41,
4
] | [
"TAGS\n#glove #gensim #license-pddl #has_space #region-us \n# Glove Twitter \n\nPre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.\n\nRead more:\n* URL\n* URL## Example Usage"
] |
fill-mask | transformers |
# bert-base-10lang-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
This model handles the following languages: english, french, spanish, german, chinese, arabic, russian, portuguese, italian, and urdu. It produces the same representations as [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) while being 22.5% smaller in size.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-10lang-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-10lang-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": ["multilingual", "en", "fr", "es", "de", "zh", "ar", "ru", "pt", "it", "ur"], "license": "apache-2.0", "datasets": "wikipedia", "widget": [{"text": "Google generated 46 billion [MASK] in revenue."}, {"text": "Paris is the capital of [MASK]."}, {"text": "Algiers is the largest city in [MASK]."}, {"text": "Paris est la [MASK] de la France."}, {"text": "Paris est la capitale de la [MASK]."}, {"text": "L'\u00e9lection am\u00e9ricaine a eu [MASK] en novembre 2020."}, {"text": "\u062a\u0642\u0639 \u0633\u0648\u064a\u0633\u0631\u0627 \u0641\u064a [MASK] \u0623\u0648\u0631\u0648\u0628\u0627"}, {"text": "\u0625\u0633\u0645\u064a \u0645\u062d\u0645\u062f \u0648\u0623\u0633\u0643\u0646 \u0641\u064a [MASK]."}]} | Geotrend/bert-base-10lang-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"en",
"fr",
"es",
"de",
"zh",
"ar",
"ru",
"pt",
"it",
"ur",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual",
"en",
"fr",
"es",
"de",
"zh",
"ar",
"ru",
"pt",
"it",
"ur"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #en #fr #es #de #zh #ar #ru #pt #it #ur #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-10lang-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
This model handles the following languages: english, french, spanish, german, chinese, arabic, russian, portuguese, italian, and urdu. It produces the same representations as bert-base-multilingual-cased while being 22.5% smaller in size.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-10lang-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nThis model handles the following languages: english, french, spanish, german, chinese, arabic, russian, portuguese, italian, and urdu. It produces the same representations as bert-base-multilingual-cased while being 22.5% smaller in size.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #en #fr #es #de #zh #ar #ru #pt #it #ur #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-10lang-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nThis model handles the following languages: english, french, spanish, german, chinese, arabic, russian, portuguese, italian, and urdu. It produces the same representations as bert-base-multilingual-cased while being 22.5% smaller in size.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
71,
141,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #en #fr #es #de #zh #ar #ru #pt #it #ur #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-10lang-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nThis model handles the following languages: english, french, spanish, german, chinese, arabic, russian, portuguese, italian, and urdu. It produces the same representations as bert-base-multilingual-cased while being 22.5% smaller in size.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-15lang-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
The measurements below have been computed on a [Google Cloud n1-standard-1 machine (1 vCPU, 3.75 GB)](https://cloud.google.com/compute/docs/machine-types\#n1_machine_type):
| Model | Num parameters | Size | Memory | Loading time |
| ------------------------------- | -------------- | -------- | -------- | ------------ |
| bert-base-multilingual-cased | 178 million | 714 MB | 1400 MB | 4.2 sec |
| Geotrend/bert-base-15lang-cased | 141 million | 564 MB | 1098 MB | 3.1 sec |
Handled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur and sw.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-15lang-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-15lang-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
| {"language": ["multilingual", "en", "fr", "es", "de", "zh", "ar", "ru", "vi", "el", "bg", "th", "tr", "hi", "ur", "sw"], "license": "apache-2.0", "datasets": "wikipedia", "widget": [{"text": "Google generated 46 billion [MASK] in revenue."}, {"text": "Paris is the capital of [MASK]."}, {"text": "Algiers is the largest city in [MASK]."}, {"text": "Paris est la [MASK] de la France."}, {"text": "Paris est la capitale de la [MASK]."}, {"text": "L'\u00e9lection am\u00e9ricaine a eu [MASK] en novembre 2020."}, {"text": "\u062a\u0642\u0639 \u0633\u0648\u064a\u0633\u0631\u0627 \u0641\u064a [MASK] \u0623\u0648\u0631\u0648\u0628\u0627"}, {"text": "\u0625\u0633\u0645\u064a \u0645\u062d\u0645\u062f \u0648\u0623\u0633\u0643\u0646 \u0641\u064a [MASK]."}]} | Geotrend/bert-base-15lang-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"en",
"fr",
"es",
"de",
"zh",
"ar",
"ru",
"vi",
"el",
"bg",
"th",
"tr",
"hi",
"ur",
"sw",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual",
"en",
"fr",
"es",
"de",
"zh",
"ar",
"ru",
"vi",
"el",
"bg",
"th",
"tr",
"hi",
"ur",
"sw"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #en #fr #es #de #zh #ar #ru #vi #el #bg #th #tr #hi #ur #sw #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| bert-base-15lang-cased
======================
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
The measurements below have been computed on a Google Cloud n1-standard-1 machine (1 vCPU, 3.75 GB):
Handled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur and sw.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
How to use
----------
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
Contact
-------
Please contact amine@URL for any question, feedback or request.
| [
"### How to cite\n\n\nContact\n-------\n\n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #en #fr #es #de #zh #ar #ru #vi #el #bg #th #tr #hi #ur #sw #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to cite\n\n\nContact\n-------\n\n\nPlease contact amine@URL for any question, feedback or request."
] | [
86,
29
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #en #fr #es #de #zh #ar #ru #vi #el #bg #th #tr #hi #ur #sw #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to cite\n\n\nContact\n-------\n\n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-25lang-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
Handled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur, sw, nl, uk, ro, pt, it, lt, no, pl, da and ja.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-25lang-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-25lang-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia", "widget": [{"text": "Google generated 46 billion [MASK] in revenue."}, {"text": "Paris is the capital of [MASK]."}, {"text": "Algiers is the largest city in [MASK]."}, {"text": "Paris est la [MASK] de la France."}, {"text": "Paris est la capitale de la [MASK]."}, {"text": "L'\u00e9lection am\u00e9ricaine a eu [MASK] en novembre 2020."}, {"text": "\u062a\u0642\u0639 \u0633\u0648\u064a\u0633\u0631\u0627 \u0641\u064a [MASK] \u0623\u0648\u0631\u0648\u0628\u0627"}, {"text": "\u0625\u0633\u0645\u064a \u0645\u062d\u0645\u062f \u0648\u0623\u0633\u0643\u0646 \u0641\u064a [MASK]."}]} | Geotrend/bert-base-25lang-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-25lang-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
Handled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur, sw, nl, uk, ro, pt, it, lt, no, pl, da and ja.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-25lang-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nHandled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur, sw, nl, uk, ro, pt, it, lt, no, pl, da and ja.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-25lang-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nHandled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur, sw, nl, uk, ro, pt, it, lt, no, pl, da and ja.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
142,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-25lang-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nHandled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur, sw, nl, uk, ro, pt, it, lt, no, pl, da and ja.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-ar-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-ar-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-ar-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
| {"language": "ar", "license": "apache-2.0", "datasets": "wikipedia", "widget": [{"text": "\u062a\u0642\u0639 \u0633\u0648\u064a\u0633\u0631\u0627 \u0641\u064a [MASK] \u0623\u0648\u0631\u0648\u0628\u0627"}, {"text": "\u0625\u0633\u0645\u064a \u0645\u062d\u0645\u062f \u0648\u0623\u0633\u0643\u0646 \u0641\u064a [MASK]."}]} | Geotrend/bert-base-ar-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"ar",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #ar #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-ar-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request.
| [
"# bert-base-ar-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #ar #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-ar-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
52,
86,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #ar #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-ar-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-bg-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-bg-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-bg-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
| {"language": "bg", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-bg-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"bg",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bg"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #bg #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-bg-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request.
| [
"# bert-base-bg-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #bg #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-bg-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
53,
87,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #bg #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-bg-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-da-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-da-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-da-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "da", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-da-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"da",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"da"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #da #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-da-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-da-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #da #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-da-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
48,
86,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #da #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-da-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-de-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-de-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-de-cased")
```
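The central claim of the card is that the reduced model returns exactly the same representations as the original bert-base-multilingual-cased. A rough sanity check of our own (it additionally downloads the full multilingual checkpoint, and the German sentence is only an illustration) is to encode the same text with both models and compare hidden states:

```python
import torch
from transformers import AutoTokenizer, AutoModel

sentence = "Berlin ist die Hauptstadt von Deutschland."

def encode(name, text):
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).last_hidden_state

small = encode("Geotrend/bert-base-de-cased", sentence)
full = encode("bert-base-multilingual-cased", sentence)

# If every wordpiece of the sentence survives the vocabulary reduction, the two
# tokenizations coincide and the hidden states should agree to numerical precision.
print(small.shape == full.shape and torch.allclose(small, full, atol=1e-5))
```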
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
| {"language": "de", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-de-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"de"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #de #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-de-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request.
| [
"# bert-base-de-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #de #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-de-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
52,
86,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #de #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-de-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-el-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-el-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-el-cased")
```
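Since the vocabulary is trimmed to the languages this checkpoint targets (here Greek, plus shared wordpieces), inspecting the tokenization can be instructive. The snippet and the Greek example sentence below are our own illustration, not part of the original card:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-el-cased")

# "Athens is the capital of Greece." (our own example sentence).
tokens = tokenizer.tokenize("Η Αθήνα είναι η πρωτεύουσα της Ελλάδας.")
print(tokens)
print(tokenizer.convert_tokens_to_ids(tokens))
```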
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
| {"language": "el", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-el-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"el",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"el"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #el #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-el-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request.
| [
"# bert-base-el-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #el #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-el-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
48,
86,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #el #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-el-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-ar-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-ar-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-ar-cased")
```
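As a usage note of our own, mirroring the widget examples declared in this card's metadata, the bilingual checkpoint fills masks in either language:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Geotrend/bert-base-en-ar-cased")

# Prompts taken from the widget examples in the card metadata.
print(fill_mask("Paris is the capital of [MASK].")[0]["token_str"])
print(fill_mask("تقع سويسرا في [MASK] أوروبا")[0]["token_str"])
```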
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
| {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia", "widget": [{"text": "Google generated 46 billion [MASK] in revenue."}, {"text": "Paris is the capital of [MASK]."}, {"text": "Algiers is the largest city in [MASK]."}, {"text": "\u062a\u0642\u0639 \u0633\u0648\u064a\u0633\u0631\u0627 \u0641\u064a [MASK] \u0623\u0648\u0631\u0648\u0628\u0627"}, {"text": "\u0625\u0633\u0645\u064a \u0645\u062d\u0645\u062f \u0648\u0623\u0633\u0643\u0646 \u0641\u064a [MASK]."}]} | Geotrend/bert-base-en-ar-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-ar-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request.
| [
"# bert-base-en-ar-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-ar-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
54,
88,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-ar-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-bg-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-bg-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-bg-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
| {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia", "widget": [{"text": "Google generated 46 billion [MASK] in revenue."}, {"text": "Paris is the capital of [MASK]."}, {"text": "Algiers is the largest city in [MASK]."}]} | Geotrend/bert-base-en-bg-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-bg-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request.
| [
"# bert-base-en-bg-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-bg-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
89,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-bg-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-cased")
```
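Beyond mask filling, the encoder can serve as a feature extractor. The sketch below is our own illustration (the sentence is adapted from the card's widget examples) and mean-pools the last hidden state into a fixed-size sentence vector:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-cased")

inputs = tokenizer(["Google generated 46 billion dollars in revenue."], return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, 768)

# Mean-pool over real tokens only, using the attention mask.
mask = inputs["attention_mask"].unsqueeze(-1)
sentence_embedding = (hidden * mask).sum(1) / mask.sum(1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```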
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
| {"language": "en", "license": "apache-2.0", "datasets": "wikipedia", "widget": [{"text": "Google generated 46 billion [MASK] in revenue."}, {"text": "Paris is the capital of [MASK]."}, {"text": "Algiers is the largest city in [MASK]."}]} | Geotrend/bert-base-en-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #en #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request.
| [
"# bert-base-en-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #en #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
48,
86,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #en #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-da-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-da-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-da-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-da-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-da-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-da-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-da-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
88,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-da-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-de-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-de-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-de-cased")
```
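The size reduction comes from trimming the wordpiece vocabulary (and with it the embedding matrix) to the selected languages. One quick way to see this, sketched below as our own illustration, is to compare vocabulary sizes against the original multilingual tokenizer:

```python
from transformers import AutoTokenizer

small = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-de-cased")
full = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

# The original multilingual vocabulary holds roughly 119k wordpieces; the
# bilingual version keeps only the entries needed for English and German.
print("en-de vocab:", small.vocab_size)
print("mBERT vocab:", full.vocab_size)
```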
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
| {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia", "widget": [{"text": "Google generated 46 billion [MASK] in revenue."}, {"text": "Paris is the capital of [MASK]."}, {"text": "Algiers is the largest city in [MASK]."}]} | Geotrend/bert-base-en-de-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-de-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request.
| [
"# bert-base-en-de-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-de-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
54,
88,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-de-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-el-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-el-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-el-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
| {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia", "widget": [{"text": "Google generated 46 billion [MASK] in revenue."}, {"text": "Paris is the capital of [MASK]."}, {"text": "Algiers is the largest city in [MASK]."}]} | Geotrend/bert-base-en-el-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-el-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request.
| [
"# bert-base-en-el-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-el-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
88,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-el-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-el-ru-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-el-ru-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-el-ru-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-el-ru-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-el-ru-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-el-ru-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-el-ru-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
90,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-el-ru-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-es-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-cased")
```
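The repository also ships TensorFlow weights (hence the `tf` tag in the record below), so the same checkpoint can be loaded with the TF classes. A minimal sketch of our own, assuming a working TensorFlow installation:

```python
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-cased")
model = TFAutoModel.from_pretrained("Geotrend/bert-base-en-es-cased")

inputs = tokenizer("Paris is the capital of France.", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```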
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
| {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia", "widget": [{"text": "Google generated 46 billion [MASK] in revenue."}, {"text": "Paris is the capital of [MASK]."}, {"text": "Algiers is the largest city in [MASK]."}]} | Geotrend/bert-base-en-es-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-es-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request.
| [
"# bert-base-en-es-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-es-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
54,
88,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-es-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-es-it-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-it-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-es-it-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-es-it-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-es-it-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-es-it-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
90,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-es-it-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-es-pt-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-pt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-pt-cased")
```
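For offline or repeated use, the trimmed tokenizer and weights can be cached locally with `save_pretrained`; a small convenience sketch of our own (the target directory name is arbitrary):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-pt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-pt-cased")

# Save a local copy so later runs do not need to reach the Hugging Face Hub.
tokenizer.save_pretrained("./bert-base-en-es-pt-cased")
model.save_pretrained("./bert-base-en-es-pt-cased")
```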
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-es-pt-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-es-pt-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-es-pt-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-es-pt-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
54,
90,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-es-pt-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-es-zh-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-zh-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-es-zh-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-es-zh-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-es-zh-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-es-zh-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
54,
91,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-es-zh-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-ar-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-ar-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-ar-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-fr-ar-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-ar-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-fr-ar-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-ar-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
54,
90,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-ar-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-cased")
```
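The widget sentences attached to this card (for example "Paris is the capital of [MASK].") can be reproduced locally. This is a minimal sketch, not part of the original card; it swaps in `AutoModelForMaskedLM` because the bare `AutoModel` above has no language-modeling head:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-cased")
mlm = AutoModelForMaskedLM.from_pretrained("Geotrend/bert-base-en-fr-cased")

# One of the widget examples from this card's metadata.
inputs = tokenizer("Paris is the capital of [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = mlm(**inputs).logits

# Top-5 candidate tokens for the masked position.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```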
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
| {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia", "widget": [{"text": "Google generated 46 billion [MASK] in revenue."}, {"text": "Paris is the capital of [MASK]."}, {"text": "Algiers is the largest city in [MASK]."}, {"text": "Paris est la [MASK] de la France."}, {"text": "Paris est la capitale de la [MASK]."}, {"text": "L'\u00e9lection am\u00e9ricaine a eu [MASK] en novembre 2020."}]} | Geotrend/bert-base-en-fr-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request.
| [
"# bert-base-en-fr-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
54,
88,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-da-ja-vi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-da-ja-vi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-da-ja-vi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-fr-da-ja-vi-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-da-ja-vi-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-fr-da-ja-vi-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-da-ja-vi-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
94,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-da-ja-vi-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-de-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-de-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-de-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-fr-de-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-de-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-fr-de-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-de-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
54,
90,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-de-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-de-no-da-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-de-no-da-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-de-no-da-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-fr-de-no-da-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-de-no-da-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-fr-de-no-da-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-de-no-da-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
94,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-de-no-da-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-es-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-es-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-es-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-fr-es-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-es-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-fr-es-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-es-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
90,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-es-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-es-de-zh-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-es-de-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-es-de-zh-cased")
```
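Because the reduced model produces the same representations as the original mBERT, the bare encoder loaded above can also serve as a multilingual feature extractor. The following is a minimal sketch; the mean-pooling strategy and example sentences are assumptions for illustration, not prescribed by the card:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-es-de-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-es-de-zh-cased")

sentences = ["This is an English sentence.", "Ceci est une phrase française."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (batch, seq_len, 768)

# Mean-pool over non-padding tokens to obtain one vector per sentence.
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)                              # torch.Size([2, 768])
```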
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-fr-es-de-zh-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-es-de-zh-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-fr-es-de-zh-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-es-de-zh-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
95,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-es-de-zh-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-es-pt-it-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-es-pt-it-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-es-pt-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-fr-es-pt-it-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-es-pt-it-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-fr-es-pt-it-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-es-pt-it-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
94,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-es-pt-it-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-it-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-it-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-fr-it-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-it-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-fr-it-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-it-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
90,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-it-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-lt-no-pl-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-lt-no-pl-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-lt-no-pl-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-fr-lt-no-pl-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-lt-no-pl-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-fr-lt-no-pl-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-lt-no-pl-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
94,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-lt-no-pl-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-nl-ru-ar-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-nl-ru-ar-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-nl-ru-ar-cased")
```
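
As an illustrative sketch that is not part of the original card, the same checkpoint can also be queried through the `fill-mask` pipeline; the example sentence and the printed fields below are assumptions chosen purely for demonstration:

```python
from transformers import pipeline

# Hedged example: masked-token prediction with the reduced checkpoint.
# The sentence is illustrative only, not taken from the card.
unmasker = pipeline("fill-mask", model="Geotrend/bert-base-en-fr-nl-ru-ar-cased")
for prediction in unmasker("Paris is the capital of [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```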
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-fr-nl-ru-ar-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-nl-ru-ar-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-fr-nl-ru-ar-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-nl-ru-ar-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
94,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-nl-ru-ar-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-uk-el-ro-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-uk-el-ro-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-uk-el-ro-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-fr-uk-el-ro-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-uk-el-ro-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-fr-uk-el-ro-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-uk-el-ro-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
94,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-uk-el-ro-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-zh-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-zh-cased")
```
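
The claim that the reduced model preserves the original representations can be spot-checked against `bert-base-multilingual-cased`; the following is a hedged, assumption-laden sketch rather than part of the original card:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Hedged sketch: compare hidden states of the reduced model and the full
# multilingual model on a sentence covered by the reduced vocabulary.
sentence = "Paris is the capital of France."

small_tok = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-zh-cased")
small_model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-zh-cased")
full_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
full_model = AutoModel.from_pretrained("bert-base-multilingual-cased")

with torch.no_grad():
    small_hidden = small_model(**small_tok(sentence, return_tensors="pt")).last_hidden_state
    full_hidden = full_model(**full_tok(sentence, return_tensors="pt")).last_hidden_state

# If the claim holds, the two tensors should match up to numerical noise.
print(torch.allclose(small_hidden, full_hidden, atol=1e-5))
```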
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-fr-zh-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-zh-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-fr-zh-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-zh-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
91,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-zh-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-fr-zh-ja-vi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-zh-ja-vi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-zh-ja-vi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-fr-zh-ja-vi-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-fr-zh-ja-vi-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-fr-zh-ja-vi-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-fr-zh-ja-vi-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
95,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-fr-zh-ja-vi-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-hi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-hi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-hi-cased")
```
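
A further illustrative sketch, not taken from the card: mean-pooled sentence embeddings for one English and one Hindi sentence, the two languages this reduced checkpoint targets (the Hindi sentence is an assumption added for demonstration):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-hi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-hi-cased")

sentences = ["Paris is the capital of France.", "पेरिस फ्रांस की राजधानी है।"]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state        # (batch, seq_len, 768)

# Mean pooling that ignores padding positions.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)                               # torch.Size([2, 768])
```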
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
| {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia", "widget": [{"text": "Google generated 46 billion [MASK] in revenue."}, {"text": "Paris is the capital of [MASK]."}, {"text": "Algiers is the largest city in [MASK]."}]} | Geotrend/bert-base-en-hi-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-hi-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request.
| [
"# bert-base-en-hi-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-hi-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
88,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-hi-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-it-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-it-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-it-cased")
```
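
Since this record's tags advertise TensorFlow weights ("tf"), the checkpoint can presumably also be loaded with the TensorFlow classes; the following is a hedged sketch rather than an official instruction, and the Italian sentence is an assumption:

```python
from transformers import AutoTokenizer, TFAutoModel

# Hedged sketch: load the TensorFlow weights of the reduced checkpoint.
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-it-cased")
model = TFAutoModel.from_pretrained("Geotrend/bert-base-en-it-cased")

inputs = tokenizer("Roma è la capitale d'Italia.", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```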
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-it-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-it-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-it-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-it-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
88,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-it-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-ja-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-ja-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-ja-cased")
```
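
As an additional illustration (an assumption, not part of the original card), the size reduction is visible from the tokenizer vocabularies alone:

```python
from transformers import AutoTokenizer

# Hedged sketch: the reduced checkpoint should expose a much smaller
# vocabulary than the full multilingual model it was derived from.
small = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-ja-cased")
full = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
print("reduced vocab:", small.vocab_size)
print("full mBERT vocab:", full.vocab_size)
```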
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-ja-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-ja-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-ja-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-ja-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
88,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-ja-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-lt-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-lt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-lt-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-lt-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-lt-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-lt-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-lt-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
50,
88,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-lt-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |
fill-mask | transformers |
# bert-base-en-nl-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-nl-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-nl-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "multilingual", "license": "apache-2.0", "datasets": "wikipedia"} | Geotrend/bert-base-en-nl-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-en-nl-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
## How to use
To generate other smaller versions of multilingual transformers please visit our Github repo.
### How to cite
## Contact
Please contact amine@URL for any question, feedback or request. | [
"# bert-base-en-nl-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-en-nl-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.",
"## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.",
"### How to cite",
"## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] | [
54,
88,
24,
6,
18
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# bert-base-en-nl-cased\n\nWe are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.\n\nUnlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.\n\n\nFor more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.## How to use\n\n\n\nTo generate other smaller versions of multilingual transformers please visit our Github repo.### How to cite## Contact \n\nPlease contact amine@URL for any question, feedback or request."
] |