| column | dtype |
|---|---|
| pipeline_tag | stringclasses (48 values) |
| library_name | stringclasses (198 values) |
| text | stringlengths (1–900k) |
| metadata | stringlengths (2–438k) |
| id | stringlengths (5–122) |
| last_modified | null |
| tags | sequencelengths (1–1.84k) |
| sha | null |
| created_at | stringlengths (25–25) |
| arxiv | sequencelengths (0–201) |
| languages | sequencelengths (0–1.83k) |
| tags_str | stringlengths (17–9.34k) |
| text_str | stringlengths (0–389k) |
| text_lists | sequencelengths (0–722) |
| processed_texts | sequencelengths (1–723) |
| tokens_length | sequencelengths (1–723) |
| input_texts | sequencelengths (1–1) |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-greek
This model was trained from scratch on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4823
- Wer: 0.3338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
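For reference, the values above roughly correspond to the `TrainingArguments` below. This is an illustrative sketch, not the original training script: the output directory is a placeholder, and Adam's betas/epsilon and the linear schedule match the Trainer defaults, so they need no extra flags.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-greek",  # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # 16 * 2 = effective batch size of 32
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed precision
)
```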
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0106 | 1.72 | 200 | 0.5519 | 0.3537 |
| 0.0249 | 3.45 | 400 | 0.5174 | 0.3465 |
| 0.0206 | 5.17 | 600 | 0.4721 | 0.3323 |
| 0.0221 | 6.89 | 800 | 0.4652 | 0.3373 |
| 0.0204 | 8.62 | 1000 | 0.4883 | 0.3389 |
| 0.0192 | 10.34 | 1200 | 0.4785 | 0.3389 |
| 0.0186 | 12.07 | 1400 | 0.4789 | 0.3378 |
| 0.0172 | 13.79 | 1600 | 0.4915 | 0.3347 |
| 0.0184 | 15.52 | 1800 | 0.4759 | 0.3440 |
| 0.0168 | 17.24 | 2000 | 0.4891 | 0.3371 |
| 0.0155 | 18.96 | 2200 | 0.4928 | 0.3394 |
| 0.0146 | 20.69 | 2400 | 0.4834 | 0.3357 |
| 0.0146 | 22.41 | 2600 | 0.4814 | 0.3362 |
| 0.0151 | 24.14 | 2800 | 0.4791 | 0.3345 |
| 0.0136 | 25.86 | 3000 | 0.4825 | 0.3356 |
| 0.0136 | 27.58 | 3200 | 0.4850 | 0.3351 |
| 0.0127 | 29.31 | 3400 | 0.4823 | 0.3338 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
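A minimal inference sketch (not taken from the original training code): it assumes the checkpoint id listed on this page and a 16 kHz mono recording; decoding a file path through the pipeline requires `ffmpeg`, and the filename is a placeholder.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jerrychatz/wav2vec2-large-xls-r-300m-greek",
)
# Transcribe a local Greek recording (16 kHz mono works best for XLS-R models).
print(asr("sample_greek.wav")["text"])
```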
| {"tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-greek", "results": []}]} | jerrychatz/wav2vec2-large-xls-r-300m-greek | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-greek
===============================
This model was trained from scratch on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4823
* Wer: 0.3338
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] | [
46,
135,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-german-t5-prompted-germanquad
eval_loss = 0.5907255411148071
eval_rouge1 = 62.0922
eval_rouge2 = 47.2761
eval_rougeL = 61.7706
eval_rougeLsum = 61.8036
eval_runtime = 4501.8065
eval_samples_per_second = 5.487
eval_steps_per_second = 2.743
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.18.0
- Tokenizers 0.11.0
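As a usage sketch (assumptions: the checkpoint id shown on this page and the prompt format from the hosted widget, i.e. a German context followed by a question), the model can be queried with the `text2text-generation` pipeline:
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="GermanT5/german-t5-oscar-ep1-prompted-germanquad",
)

# Prompt format taken from the hosted-widget example: context, blank line, question.
prompt = (
    "Philipp ist 26 Jahre alt und lebt in Nürnberg, Deutschland. "
    "Derzeit arbeitet er als Machine Learning Engineer und Tech Lead bei Hugging Face, "
    "um künstliche Intelligenz durch Open Source und Open Science zu demokratisieren.\n\n"
    "Welches Ziel hat Hugging Face?\n"
)
print(generator(prompt)[0]["generated_text"])
```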
| {"tags": ["generated_from_trainer"], "widget": [{"text": "Philipp ist 26 Jahre alt und lebt in N\u00fcrnberg, Deutschland. Derzeit arbeitet er als Machine Learning Engineer und Tech Lead bei Hugging Face, um k\u00fcnstliche Intelligenz durch Open Source und Open Science zu demokratisieren.\n\nWelches Ziel hat Hugging Face?\n"}], "model-index": [{"name": "test-german-t5-prompted-germanquad", "results": []}]} | GermanT5/german-t5-oscar-ep1-prompted-germanquad | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# test-german-t5-prompted-germanquad
eval_loss = 0.5907255411148071
eval_rouge1 = 62.0922
eval_rouge2 = 47.2761
eval_rougeL = 61.7706
eval_rougeLsum = 61.8036
eval_runtime = 4501.8065
eval_samples_per_second = 5.487
eval_steps_per_second = 2.743
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.18.0
- Tokenizers 0.11.0
| [
"# test-german-t5-prompted-germanquad\n\neval_loss = 0.5907255411148071 \neval_rouge1 = 62.0922 \neval_rouge2 = 47.2761 \neval_rougeL = 61.7706 \neval_rougeLsum = 61.8036 \neval_runtime = 4501.8065 \neval_samples_per_second = 5.487 \neval_steps_per_second = 2.743",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5.6e-05\n- train_batch_size: 4\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.9.1+cu102\n- Datasets 1.18.0\n- Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# test-german-t5-prompted-germanquad\n\neval_loss = 0.5907255411148071 \neval_rouge1 = 62.0922 \neval_rouge2 = 47.2761 \neval_rougeL = 61.7706 \neval_rougeLsum = 61.8036 \neval_runtime = 4501.8065 \neval_samples_per_second = 5.487 \neval_steps_per_second = 2.743",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5.6e-05\n- train_batch_size: 4\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.9.1+cu102\n- Datasets 1.18.0\n- Tokenizers 0.11.0"
] | [
50,
107,
7,
9,
9,
4,
95,
47
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# test-german-t5-prompted-germanquad\n\neval_loss = 0.5907255411148071 \neval_rouge1 = 62.0922 \neval_rouge2 = 47.2761 \neval_rougeL = 61.7706 \neval_rougeLsum = 61.8036 \neval_runtime = 4501.8065 \neval_samples_per_second = 5.487 \neval_steps_per_second = 2.743## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5.6e-05\n- train_batch_size: 4\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.9.1+cu102\n- Datasets 1.18.0\n- Tokenizers 0.11.0"
] |
text-classification | transformers |
## Dutch Fine-Tuned BERT For Passive/Active Voice Classification.
### Lijdende en Bedrijvende vorm classificatie voor zinnen
#### Examples
Try the following examples in the Hosted inference API, or run them locally with the pipeline sketch after the answers below:
1. Jan werd opgehaald door zijn moeder.
2. Wie niet weg is, is gezien
3. Ik ben van plan om morgen te gaan werken
4. De makelaar heeft het nieuwe huis verkocht aan de bewoners die iets verderop wonen.
5. De koekjes die mama had gemaakt waren door de jongens allemaal opgegeten.
LABEL_0 = Active / Bedrijvend. LABEL_1 = Passive / Lijdend
Answers (what they should be):
1. 1
2. 1
3. 0
4. 0
5. 1
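A sketch for running these sentences locally with the `transformers` pipeline (assuming the checkpoint id shown on this page and default pipeline settings):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Gerwin/bert-for-pac")

# LABEL_0 = active / bedrijvend, LABEL_1 = passive / lijdend (see above).
sentences = [
    "Jan werd opgehaald door zijn moeder.",
    "Wie niet weg is, is gezien",
    "Ik ben van plan om morgen te gaan werken",
    "De makelaar heeft het nieuwe huis verkocht aan de bewoners die iets verderop wonen.",
    "De koekjes die mama had gemaakt waren door de jongens allemaal opgegeten.",
]
for sentence, prediction in zip(sentences, classifier(sentences)):
    print(prediction["label"], "-", sentence)
```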
#### Basic Information
This model is fine-tuned on [BERTje](https://huggingface.co/GroNLP/bert-base-dutch-cased) for recognizing passive and active voice in Dutch sentences.
Contact me at [email protected] for further questions.
Gerwin | {"language": ["nl"], "license": "apache-2.0", "tags": ["bert", "passive", "active"]} | Gerwin/bert-for-pac | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"passive",
"active",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #bert #text-classification #passive #active #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
## Dutch Fine-Tuned BERT For Passive/Active Voice Classification.
### Lijdende en Bedrijvende vorm classificatie voor zinnen
#### Examples
Try the following examples in the Hosted inference API:
1. Jan werd opgehaald door zijn moeder.
2. Wie niet weg is, is gezien
3. Ik ben van plan om morgen te gaan werken
4. De makelaar heeft het nieuwe huis verkocht aan de bewoners die iets verderop wonen.
5. De koekjes die mama had gemaakt waren door de jongens allemaal opgegeten.
LABEL_0 = Active / Bedrijvend. LABEL_1 = Passive / Lijdend
Answers (what they should be):
1. 1
2. 1
3. 0
4. 0
5. 1
#### Basic Information
This model is fine-tuned on BERTje for recognizing passive and active voice in Dutch sentences.
Contact me at gerwindekruijf@URL for further questions.
Gerwin | [
"## Dutch Fine-Tuned BERT For Passive/Active Voice Classification.",
"### Lijdende en Bedrijvende vorm classificatie voor zinnen",
"#### Examples\nTry the following examples in the Hosted inference API:\n1. Jan werd opgehaald door zijn moeder.\n2. Wie niet weg is, is gezien\n3. Ik ben van plan om morgen te gaan werken\n4. De makelaar heeft het nieuwe huis verkocht aan de bewoners die iets verderop wonen.\n5. De koekjes die mama had gemaakt waren door de jongens allemaal opgegeten.\n\nLABEL_0 = Active / Bedrijvend. LABEL_1 = Passive / Lijdend\n\nAnswers (what they should be): \n1. 1\n2. 1\n3. 0\n4. 0\n5. 1",
"#### Basic Information\nThis model is fine-tuned on BERTje for recognizing passive and active voice in Dutch sentences. \n\nContact me at gerwindekruijf@URL for further questions.\n\n\nGerwin"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #passive #active #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## Dutch Fine-Tuned BERT For Passive/Active Voice Classification.",
"### Lijdende en Bedrijvende vorm classificatie voor zinnen",
"#### Examples\nTry the following examples in the Hosted inference API:\n1. Jan werd opgehaald door zijn moeder.\n2. Wie niet weg is, is gezien\n3. Ik ben van plan om morgen te gaan werken\n4. De makelaar heeft het nieuwe huis verkocht aan de bewoners die iets verderop wonen.\n5. De koekjes die mama had gemaakt waren door de jongens allemaal opgegeten.\n\nLABEL_0 = Active / Bedrijvend. LABEL_1 = Passive / Lijdend\n\nAnswers (what they should be): \n1. 1\n2. 1\n3. 0\n4. 0\n5. 1",
"#### Basic Information\nThis model is fine-tuned on BERTje for recognizing passive and active voice in Dutch sentences. \n\nContact me at gerwindekruijf@URL for further questions.\n\n\nGerwin"
] | [
42,
14,
24,
164,
43
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #passive #active #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n## Dutch Fine-Tuned BERT For Passive/Active Voice Classification.### Lijdende en Bedrijvende vorm classificatie voor zinnen#### Examples\nTry the following examples in the Hosted inference API:\n1. Jan werd opgehaald door zijn moeder.\n2. Wie niet weg is, is gezien\n3. Ik ben van plan om morgen te gaan werken\n4. De makelaar heeft het nieuwe huis verkocht aan de bewoners die iets verderop wonen.\n5. De koekjes die mama had gemaakt waren door de jongens allemaal opgegeten.\n\nLABEL_0 = Active / Bedrijvend. LABEL_1 = Passive / Lijdend\n\nAnswers (what they should be): \n1. 1\n2. 1\n3. 0\n4. 0\n5. 1#### Basic Information\nThis model is fine-tuned on BERTje for recognizing passive and active voice in Dutch sentences. \n\nContact me at gerwindekruijf@URL for further questions.\n\n\nGerwin"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9161
- Mae: 0.4634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1217 | 1.0 | 235 | 0.9396 | 0.4878 |
| 0.9574 | 2.0 | 470 | 0.9161 | 0.4634 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
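For reference, a hedged usage sketch: since the card reports MAE on amazon_reviews_multi, the output labels presumably correspond to review star ratings, but treat that mapping as an assumption.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Giannipinelli/xlm-roberta-base-finetuned-marc-en",
)
# Example English review; the returned label is the predicted rating class.
print(classifier("This book was a complete waste of money."))
```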
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en", "results": []}]} | Giannipinelli/xlm-roberta-base-finetuned-marc-en | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us
| xlm-roberta-base-finetuned-marc-en
==================================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9161
* Mae: 0.4634
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.14.1
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
53,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers | # Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned: facebook/wav2vec2-large-xlsr-53 | {} | Gigworks/ASR_id | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
| # Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned: facebook/wav2vec2-large-xlsr-53 | [
"# Wav2Vec2-Large-XLSR-Indonesian\r\n\r\nFine-tuned: facebook/wav2vec2-large-xlsr-53"
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Indonesian\r\n\r\nFine-tuned: facebook/wav2vec2-large-xlsr-53"
] | [
32,
33
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n# Wav2Vec2-Large-XLSR-Indonesian\r\n\r\nFine-tuned: facebook/wav2vec2-large-xlsr-53"
] |
null | null | <b>Speech-To-Text Chinese Model</b>
<br/><br/>
Reference: <br/>
Model - https://huggingface.co/espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char <br/>
Code - https://huggingface.co/spaces/akhaliq/espnet2_asr/blob/main/app.py
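A hedged inference sketch following the standard espnet2 / espnet_model_zoo usage pattern (not copied from the linked app, so treat the exact calls as assumptions; requires `espnet`, `espnet_model_zoo` and `soundfile`, and the audio path is a placeholder):
```python
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

# Download the referenced WeNetSpeech Mandarin checkpoint and build the recognizer.
d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack("espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char")
)

speech, rate = soundfile.read("speech_zh.wav")  # 16 kHz mono expected
text, *_ = speech2text(speech)[0]               # best hypothesis of the n-best list
print(text)
```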
| {} | Gigworks/ASR_zh_espnet2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| <b>Speech-To-Text Chinese Model</b>
<br/><br/>
Reference: <br/>
Model - URL <br/>
Code - URL
| [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
feature-extraction | transformers | # FongBERT
FongBERT is a BERT model trained on 68,363 sentences in [Fon](https://en.wikipedia.org/wiki/Fon_language). The data are compiled from [JW300](https://opus.nlpl.eu/JW300.php) and additional data I scraped from the [JW](https://www.jw.org/en/) website.
It is the first pretrained model to leverage transfer learning for downstream tasks in Fon.
Below are some examples of missing word prediction.
from transformers import AutoTokenizer, AutoModelForMaskedLM
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Gilles/FongBERT")
model = AutoModelForMaskedLM.from_pretrained("Gilles/FongBERT")
fill = pipeline('fill-mask', model=model, tokenizer=tokenizer)
#### Example 1
**Sentence 1**: un tuùn ɖɔ un jló na wazɔ̌ nú we . **Translation**: I know I have to work for you.
**Masked Sentence**: un tuùn ɖɔ un jló na wazɔ̌ <"mask"> we . **Translation**: I know I have to work <"mask"> you.
fill(f'un tuùn ɖɔ un jló na wazɔ̌ {fill.tokenizer.mask_token} we')
[{'score': 0.994536280632019,
'sequence': 'un tuùn ɖɔ un jló na wazɔ̌ nú we',
'token': 312,
'token_str': ' nú'},
{'score': 0.0015309195732697845,
'sequence': 'un tuùn ɖɔ un jló na wazɔ̌nu we',
...........]
#### Example 2
**Sentence 2**: un yi wan nu we ɖesu . **Translation**: I love you so much.
**Masked Sentence**: un yi <"mask"> nu we ɖesu . **Translation**: I <"mask"> you so much.
[{'score': 0.31483960151672363,
'sequence': 'un yi wan nu we ɖesu',
'token': 639,
'token_str': ' wan'},
{'score': 0.20940221846103668,
'sequence': 'un yi ba nu we ɖesu',
...........]
#### Example 3
**Sentence 3**: un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú táan ɖé . **Translation**: I went to my boyfriend for a while.
**Masked Sentence**: un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú <"mask"> ɖé . **Translation**: I went to my boyfriend for a <"mask">.
[{'score': 0.934298574924469,
'sequence': 'un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú táan ɖé',
'token': 1102,
'token_str': ' táan'},
{'score': 0.03750855475664139,
'sequence': 'un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú ganxixo ɖé',
...........]
| {} | Gilles/FongBERT | null | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #feature-extraction #endpoints_compatible #region-us
| # FongBERT
FongBERT is a BERT model trained on 68,363 sentences in Fon. The data are compiled from JW300 and additional data I scraped from the JW website.
It is the first pretrained model to leverage transfer learning for downstream tasks in Fon.
Below are some examples of missing word prediction.
from transformers import AutoTokenizer, AutoModelForMaskedLM
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Gilles/FongBERT")
model = AutoModelForMaskedLM.from_pretrained("Gilles/FongBERT")
fill = pipeline('fill-mask', model=model, tokenizer=tokenizer)
#### Example 1
Sentence 1: un tuùn ɖɔ un jló na wazɔ̌ nú we . Translation: I know I have to work for you.
Masked Sentence: un tuùn ɖɔ un jló na wazɔ̌ <"mask"> we . Translation: I know I have to work <"mask"> you.
fill(f'un tuùn ɖɔ un jló na wazɔ̌ {fill.tokenizer.mask_token} we')
[{'score': 0.994536280632019,
'sequence': 'un tuùn ɖɔ un jló na wazɔ̌ nú we',
'token': 312,
'token_str': ' nú'},
{'score': 0.0015309195732697845,
'sequence': 'un tuùn ɖɔ un jló na wazɔ̌nu we',
...........]
#### Example 2
Sentence 2: un yi wan nu we ɖesu . Translation: I love you so much.
Masked Sentence: un yi <"mask"> nu we ɖesu . Translation: I <"mask"> you so much.
[{'score': 0.31483960151672363,
'sequence': 'un yi wan nu we ɖesu',
'token': 639,
'token_str': ' wan'},
{'score': 0.20940221846103668,
'sequence': 'un yi ba nu we ɖesu',
...........]
#### Example 3
Sentence 3: un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú táan ɖé . Translation: I went to my boyfriend for a while.
Masked Sentence: un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú <"mask"> ɖé . Translation: I went to my boyfriend for a <"mask">.
[{'score': 0.934298574924469,
'sequence': 'un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú táan ɖé',
'token': 1102,
'token_str': ' táan'},
{'score': 0.03750855475664139,
'sequence': 'un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú ganxixo ɖé',
...........]
| [
"# FongBERT\n\nFongBERT is a BERT model trained on 68.363 sentences in Fon. The data are compiled from JW300 and other additional data I scraped from the JW website.\nIt is the first pretrained model to leverage transfer learning for downtream tasks for Fon.\nBelow are some examples of missing word prediction.\n\n\nfrom transformers import AutoTokenizer, AutoModelForMaskedLM\nfrom transformers import pipeline\n \ntokenizer = AutoTokenizer.from_pretrained(\"Gilles/FongBERT\")\n\nmodel = AutoModelForMaskedLM.from_pretrained(\"Gilles/FongBERT\")\n\n\nfill = pipeline('fill-mask', model=model, tokenizer=tokenizer)",
"#### Example 1\n\nSentence 1: un tuùn ɖɔ un jló na wazɔ̌ nú we . Translation: I know I have to work for you.\n\nMasked Sentence: un tuùn ɖɔ un jló na wazɔ̌ <\"mask\"> we . Translation: I know I have to work <\"mask\"> you.\n\nfill(f'un tuùn ɖɔ un jló na wazɔ̌ {fill.tokenizer.mask_token} we')\n\n[{'score': 0.994536280632019,\n 'sequence': 'un tuùn ɖɔ un jló na wazɔ̌ nú we',\n 'token': 312,\n 'token_str': ' nú'},\n {'score': 0.0015309195732697845,\n 'sequence': 'un tuùn ɖɔ un jló na wazɔ̌nu we',\n...........]",
"#### Example 2\n\nSentence 2: un yi wan nu we ɖesu . Translation: I love you so much.\n\nMasked Sentence: un yi <\"mask\"> nu we ɖesu . Translation: I <\"mask\"> you so much.\n\n[{'score': 0.31483960151672363,\n 'sequence': 'un yi wan nu we ɖesu',\n 'token': 639,\n 'token_str': ' wan'},\n {'score': 0.20940221846103668,\n 'sequence': 'un yi ba nu we ɖesu',\n ...........]",
"#### Example 3\n\nSentence 3: un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú táan ɖé . Translation: I went to my boyfriend for a while.\n\nMasked Sentence: un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú <\"mask\"> ɖé . Translation: I went to my boyfriend for a <\"mask\">.\n\n [{'score': 0.934298574924469,\n 'sequence': 'un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú táan ɖé',\n 'token': 1102,\n 'token_str': ' táan'},\n {'score': 0.03750855475664139,\n 'sequence': 'un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú ganxixo ɖé',\n ...........]"
] | [
"TAGS\n#transformers #pytorch #roberta #feature-extraction #endpoints_compatible #region-us \n",
"# FongBERT\n\nFongBERT is a BERT model trained on 68.363 sentences in Fon. The data are compiled from JW300 and other additional data I scraped from the JW website.\nIt is the first pretrained model to leverage transfer learning for downtream tasks for Fon.\nBelow are some examples of missing word prediction.\n\n\nfrom transformers import AutoTokenizer, AutoModelForMaskedLM\nfrom transformers import pipeline\n \ntokenizer = AutoTokenizer.from_pretrained(\"Gilles/FongBERT\")\n\nmodel = AutoModelForMaskedLM.from_pretrained(\"Gilles/FongBERT\")\n\n\nfill = pipeline('fill-mask', model=model, tokenizer=tokenizer)",
"#### Example 1\n\nSentence 1: un tuùn ɖɔ un jló na wazɔ̌ nú we . Translation: I know I have to work for you.\n\nMasked Sentence: un tuùn ɖɔ un jló na wazɔ̌ <\"mask\"> we . Translation: I know I have to work <\"mask\"> you.\n\nfill(f'un tuùn ɖɔ un jló na wazɔ̌ {fill.tokenizer.mask_token} we')\n\n[{'score': 0.994536280632019,\n 'sequence': 'un tuùn ɖɔ un jló na wazɔ̌ nú we',\n 'token': 312,\n 'token_str': ' nú'},\n {'score': 0.0015309195732697845,\n 'sequence': 'un tuùn ɖɔ un jló na wazɔ̌nu we',\n...........]",
"#### Example 2\n\nSentence 2: un yi wan nu we ɖesu . Translation: I love you so much.\n\nMasked Sentence: un yi <\"mask\"> nu we ɖesu . Translation: I <\"mask\"> you so much.\n\n[{'score': 0.31483960151672363,\n 'sequence': 'un yi wan nu we ɖesu',\n 'token': 639,\n 'token_str': ' wan'},\n {'score': 0.20940221846103668,\n 'sequence': 'un yi ba nu we ɖesu',\n ...........]",
"#### Example 3\n\nSentence 3: un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú táan ɖé . Translation: I went to my boyfriend for a while.\n\nMasked Sentence: un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú <\"mask\"> ɖé . Translation: I went to my boyfriend for a <\"mask\">.\n\n [{'score': 0.934298574924469,\n 'sequence': 'un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú táan ɖé',\n 'token': 1102,\n 'token_str': ' táan'},\n {'score': 0.03750855475664139,\n 'sequence': 'un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú ganxixo ɖé',\n ...........]"
] | [
23,
157,
206,
143,
202
] | [
"TAGS\n#transformers #pytorch #roberta #feature-extraction #endpoints_compatible #region-us \n# FongBERT\n\nFongBERT is a BERT model trained on 68.363 sentences in Fon. The data are compiled from JW300 and other additional data I scraped from the JW website.\nIt is the first pretrained model to leverage transfer learning for downtream tasks for Fon.\nBelow are some examples of missing word prediction.\n\n\nfrom transformers import AutoTokenizer, AutoModelForMaskedLM\nfrom transformers import pipeline\n \ntokenizer = AutoTokenizer.from_pretrained(\"Gilles/FongBERT\")\n\nmodel = AutoModelForMaskedLM.from_pretrained(\"Gilles/FongBERT\")\n\n\nfill = pipeline('fill-mask', model=model, tokenizer=tokenizer)#### Example 1\n\nSentence 1: un tuùn ɖɔ un jló na wazɔ̌ nú we . Translation: I know I have to work for you.\n\nMasked Sentence: un tuùn ɖɔ un jló na wazɔ̌ <\"mask\"> we . Translation: I know I have to work <\"mask\"> you.\n\nfill(f'un tuùn ɖɔ un jló na wazɔ̌ {fill.tokenizer.mask_token} we')\n\n[{'score': 0.994536280632019,\n 'sequence': 'un tuùn ɖɔ un jló na wazɔ̌ nú we',\n 'token': 312,\n 'token_str': ' nú'},\n {'score': 0.0015309195732697845,\n 'sequence': 'un tuùn ɖɔ un jló na wazɔ̌nu we',\n...........]#### Example 2\n\nSentence 2: un yi wan nu we ɖesu . Translation: I love you so much.\n\nMasked Sentence: un yi <\"mask\"> nu we ɖesu . Translation: I <\"mask\"> you so much.\n\n[{'score': 0.31483960151672363,\n 'sequence': 'un yi wan nu we ɖesu',\n 'token': 639,\n 'token_str': ' wan'},\n {'score': 0.20940221846103668,\n 'sequence': 'un yi ba nu we ɖesu',\n ...........]#### Example 3\n\nSentence 3: un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú táan ɖé . Translation: I went to my boyfriend for a while.\n\nMasked Sentence: un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú <\"mask\"> ɖé . Translation: I went to my boyfriend for a <\"mask\">.\n\n [{'score': 0.934298574924469,\n 'sequence': 'un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú táan ɖé',\n 'token': 1102,\n 'token_str': ' táan'},\n {'score': 0.03750855475664139,\n 'sequence': 'un yì cí sunnu xɔ́ntɔn ce Tony gɔ́n nú ganxixo ɖé',\n ...........]"
] |
image-classification | transformers |
# places
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Beach

#### City

#### Forest
 | {"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]} | Giuliano/places | null | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# places
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
## Example Images
#### Beach
!Beach
#### City
!City
#### Forest
!Forest | [
"# places\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### Beach\n\n!Beach",
"#### City\n\n!City",
"#### Forest\n\n!Forest"
] | [
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# places\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### Beach\n\n!Beach",
"#### City\n\n!City",
"#### Forest\n\n!Forest"
] | [
40,
40,
4,
7,
7,
7
] | [
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n# places\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.## Example Images#### Beach\n\n!Beach#### City\n\n!City#### Forest\n\n!Forest"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mandarin
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
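A minimal inference sketch (an assumption-laden example, not part of the original card): it presumes the repository ships the usual Wav2Vec2 processor and tokenizer files and that the input is a 16 kHz mono recording; the file path is a placeholder.
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("GleamEyeBeast/Mandarin")
model = Wav2Vec2ForCTC.from_pretrained("GleamEyeBeast/Mandarin")

speech, sampling_rate = sf.read("mandarin_sample.wav")  # placeholder path, 16 kHz mono
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame, then collapse.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```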
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "Mandarin", "results": []}]} | GleamEyeBeast/Mandarin | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# Mandarin
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| [
"# Mandarin\n\nThis model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Mandarin\n\nThis model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] | [
54,
34,
7,
9,
9,
4,
106,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n# Mandarin\n\nThis model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mandarin_naive
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4584
- Wer: 0.3999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8963 | 3.67 | 400 | 1.0645 | 0.8783 |
| 0.5506 | 7.34 | 800 | 0.5032 | 0.5389 |
| 0.2111 | 11.01 | 1200 | 0.4765 | 0.4712 |
| 0.1336 | 14.68 | 1600 | 0.4815 | 0.4511 |
| 0.0974 | 18.35 | 2000 | 0.4956 | 0.4370 |
| 0.0748 | 22.02 | 2400 | 0.4881 | 0.4235 |
| 0.0584 | 25.69 | 2800 | 0.4732 | 0.4193 |
| 0.0458 | 29.36 | 3200 | 0.4584 | 0.3999 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
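For reference, the WER metric reported above is typically computed with the `datasets` library listed in the framework versions; a hedged sketch of the call is shown below (the `wer` metric wraps `jiwer`, which must be installed, and the strings are hypothetical transcripts used only to illustrate the API).
```python
from datasets import load_metric

wer_metric = load_metric("wer")  # requires the jiwer package

predictions = ["今天 天气 很好"]   # hypothetical model output (space-separated tokens)
references = ["今天 天气 真好"]    # hypothetical ground-truth transcript
print(wer_metric.compute(predictions=predictions, references=references))
```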
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "Mandarin_naive", "results": []}]} | GleamEyeBeast/Mandarin_naive | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
| Mandarin\_naive
===============
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4584
* Wer: 0.3999
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
54,
151,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1761
- Wer: 0.2161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.5828 | 4.0 | 500 | 3.0263 | 1.0 |
| 1.8657 | 8.0 | 1000 | 0.2213 | 0.2650 |
| 0.332 | 12.0 | 1500 | 0.2095 | 0.2413 |
| 0.2037 | 16.0 | 2000 | 0.1906 | 0.2222 |
| 0.1282 | 20.0 | 2500 | 0.1761 | 0.2161 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "test", "results": []}]} | GleamEyeBeast/test | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| test
====
This model is a fine-tuned version of facebook/wav2vec2-base-960h on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1761
* Wer: 0.2161
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 20
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] | [
47,
128,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1373
- F1: 0.8630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2663 | 1.0 | 525 | 0.1712 | 0.8158 |
| 0.1329 | 2.0 | 1050 | 0.1421 | 0.8483 |
| 0.0846 | 3.0 | 1575 | 0.1373 | 0.8630 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu116
- Datasets 2.3.2
- Tokenizers 0.12.1
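A usage sketch (assumptions: the checkpoint id shown on this page and the PAN-X/WikiANN label set of PER, ORG and LOC entities; the example sentence is made up):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Gonalb/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # group word pieces into entity spans
)
print(ner("Angela Merkel besuchte das Hauptquartier von Siemens in München."))
```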
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["xtreme"], "metrics": ["f1"], "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "xtreme", "type": "xtreme", "args": "PAN-X.de"}, "metrics": [{"type": "f1", "value": 0.8629840546697038, "name": "F1"}]}]}]} | Gonalb/xlm-roberta-base-finetuned-panx-de | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
| xlm-roberta-base-finetuned-panx-de
==================================
This model is a fine-tuned version of xlm-roberta-base on the xtreme dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1373
* F1: 0.8630
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 24
* eval\_batch\_size: 24
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.20.1
* Pytorch 1.12.0+cu116
* Datasets 2.3.2
* Tokenizers 0.12.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.20.1\n* Pytorch 1.12.0+cu116\n* Datasets 2.3.2\n* Tokenizers 0.12.1"
] | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.20.1\n* Pytorch 1.12.0+cu116\n* Datasets 2.3.2\n* Tokenizers 0.12.1"
] | [
55,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.20.1\n* Pytorch 1.12.0+cu116\n* Datasets 2.3.2\n* Tokenizers 0.12.1"
] |
text-generation | transformers |
# Jackie DialoGPT Model | {"tags": ["conversational"]} | Gowtham25/DialoGPT-small-jackie | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Jackie DialoGPT Model | [
"# Jackie DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Jackie DialoGPT Model"
] | [
39,
6
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Jackie DialoGPT Model"
] |
null | null |
# Graphcore/bart-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug and play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
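As a quick illustration of the base model itself (independent of the IPU configuration shipped in this repository), the sketch below exercises the standard `facebook/bart-base` checkpoint with the regular Transformers API; the masked sentence and generation settings are arbitrary:
```
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# BART's pre-training includes text infilling, so it can reconstruct a masked span.
inputs = tokenizer("UN Chief says there is no <mask> in Syria", return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```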
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the BART base model (e.g. [facebook/bart-base](https://huggingface.co/facebook/bart-base)) on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/bart-base-ipu")
``` | {"license": "apache-2.0"} | Graphcore/bart-base-ipu | null | [
"optimum_graphcore",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#optimum_graphcore #license-apache-2.0 #region-us
|
# Graphcore/bart-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.
## Model description
BART is a transformer encoder-encoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
## Intended uses & limitations
This model contains just the 'IPUConfig' files for running the BART base model (e.g. facebook/bart-base) on Graphcore IPUs.
This model contains no model weights, only an IPUConfig.
## Usage
| [
"# Graphcore/bart-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nBART is a transformer encoder-encoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.\n\nBART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the BART base model (e.g. facebook/bart-base) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
"TAGS\n#optimum_graphcore #license-apache-2.0 #region-us \n",
"# Graphcore/bart-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nBART is a transformer encoder-encoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.\n\nBART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the BART base model (e.g. facebook/bart-base) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
19,
197,
129,
57,
3
] | [
"TAGS\n#optimum_graphcore #license-apache-2.0 #region-us \n# Graphcore/bart-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.## Model description\n\nBART is a transformer encoder-encoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.\n\nBART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the BART base model (e.g. facebook/bart-base) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.## Usage"
] |
null | null | # Graphcore/bert-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug and play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
BERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM.
It was pretrained with two objectives: masked language modelling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which sees the words one after another, MLM lets BERT learn a bidirectional representation of the text. In addition to MLM, NSP is used to jointly pretrain text-pair representations.
Through its pre-trained representation, it reduces the engineering effort needed to build task-specific architectures, and it achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
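To make the MLM objective concrete, here is a minimal fill-mask sketch using the stock `bert-base-uncased` weights (this repository itself carries no weights); the example sentence is arbitrary:
```
from transformers import pipeline

# Runs on the standard CPU/GPU stack; IPU execution is configured separately through optimum-graphcore.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```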
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the BERT base model (e.g. [bert-base-uncased](https://huggingface.co/bert-base-uncased) or [bert-base-cased](https://huggingface.co/bert-base-cased)) on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")
``` | {} | Graphcore/bert-base-ipu | null | [
"optimum_graphcore",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#optimum_graphcore #region-us
| # Graphcore/bert-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.
## Model description
BERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM.
It was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.
It reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
## Intended uses & limitations
This model contains just the 'IPUConfig' files for running the BERT base model (e.g. bert-base-uncased or bert-base-cased) on Graphcore IPUs.
This model contains no model weights, only an IPUConfig.
## Usage
| [
"# Graphcore/bert-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nBERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. \n\nIt was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.\n\nIt reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the BERT base model (e.g. bert-base-uncased or bert-base-cased) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
"TAGS\n#optimum_graphcore #region-us \n",
"# Graphcore/bert-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nBERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. \n\nIt was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.\n\nIt reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the BERT base model (e.g. bert-base-uncased or bert-base-cased) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
11,
197,
189,
65,
3
] | [
"TAGS\n#optimum_graphcore #region-us \n# Graphcore/bert-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.## Model description\n\nBERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. \n\nIt was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.\n\nIt reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the BERT base model (e.g. bert-base-uncased or bert-base-cased) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.## Usage"
] |
null | null | # Graphcore/bert-large-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug and play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
BERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM.
It was pretrained with two objectives: masked language modelling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which sees the words one after another, MLM lets BERT learn a bidirectional representation of the text. In addition to MLM, NSP is used to jointly pretrain text-pair representations.
Through its pre-trained representation, it reduces the engineering effort needed to build task-specific architectures, and it achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
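The NSP objective described above can be probed directly with the stock `bert-large-uncased` weights (this repository itself carries no weights); a rough sketch with arbitrarily chosen sentences:
```
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-large-uncased")

prompt = "The storm knocked out power across the city."
follow_up = "Crews worked through the night to restore electricity."

inputs = tokenizer(prompt, follow_up, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Index 0 scores "follow_up really is the next sentence"; index 1 scores "it is a random sentence".
probs = torch.softmax(logits, dim=-1)
print(f"P(next sentence) = {probs[0, 0]:.3f}")
```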
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the BERT large model (e.g. [bert-large-uncased](https://huggingface.co/bert-large-uncased) or [bert-large-cased](https://huggingface.co/bert-large-cased)) on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-large-ipu")
``` | {} | Graphcore/bert-large-ipu | null | [
"optimum_graphcore",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#optimum_graphcore #region-us
| # Graphcore/bert-large-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.
## Model description
BERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM.
It was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.
It reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
# Intended uses & limitations
This model contains just the 'IPUConfig' files for running the BERT large model (e.g. bert-large-uncased or bert-large-cased) on Graphcore IPUs.
This model contains no model weights, only an IPUConfig.
## Usage
| [
"# Graphcore/bert-large-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nBERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. \n\nIt was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.\n\nIt reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.",
"# Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the BERT large model (e.g. bert-large-uncased or bert-large-cased) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
"TAGS\n#optimum_graphcore #region-us \n",
"# Graphcore/bert-large-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nBERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. \n\nIt was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.\n\nIt reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.",
"# Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the BERT large model (e.g. bert-large-uncased or bert-large-cased) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
11,
197,
189,
64,
3
] | [
"TAGS\n#optimum_graphcore #region-us \n# Graphcore/bert-large-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.## Model description\n\nBERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. \n\nIt was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.\n\nIt reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.# Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the BERT large model (e.g. bert-large-uncased or bert-large-cased) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.## Usage"
] |
question-answering | transformers |
# Graphcore/bert-large-uncased-squad
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug and play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
BERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM.
It was pretrained with two objectives: masked language modelling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which sees the words one after another, MLM lets BERT learn a bidirectional representation of the text. In addition to MLM, NSP is used to jointly pretrain text-pair representations.
Through its pre-trained representation, it reduces the engineering effort needed to build task-specific architectures, and it achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
## Intended uses & limitations
This model is a fine-tuned version of [Graphcore/bert-large-uncased](https://huggingface.co/Graphcore/bert-large-uncased) on the SQuAD dataset.
## Training and evaluation data
Trained on SQuAD dataset:
- [HuggingFace/squad](https://huggingface.co/datasets/squad)
## Training procedure
Model was trained on 16 Graphcore Mk2 IPUs using the [optimum-graphcore](https://github.com/huggingface/optimum-graphcore) library.
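Because this checkpoint does include fine-tuned weights, it can be exercised directly with the standard question-answering pipeline. A minimal sketch, assuming the plain Transformers stack rather than IPU execution, with an arbitrary question/context pair:
```
from transformers import pipeline

qa = pipeline("question-answering", model="Graphcore/bert-large-uncased-squad")

result = qa(
    question="What does the IPU accelerate?",
    context="Graphcore's IPU is a massively parallel processor designed to accelerate machine intelligence.",
)
print(result["answer"], result["score"])
```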
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "Graphcore/bert-large-uncased-squad", "results": []}]} | Graphcore/bert-large-uncased-squad | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# Graphcore/bert-large-uncased-squad
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.
## Model description
BERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM.
It was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.
It reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
## Intended uses & limitations
This model is a fine-tuned version of Graphcore/bert-large-uncased on the SQuAD dataset.
## Training and evaluation data
Trained on SQuAD dataset:
- HuggingFace/squad
## Training procedure
Model was trained on 16 Graphcore Mk2 IPUs using the optimum-graphcore library.
| [
"# Graphcore/bert-large-uncased-squad\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nBERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. \n\nIt was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.\n\nIt reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.",
"## Intended uses & limitations\n\nThis model is a fine-tuned version of Graphcore/bert-large-uncased on the SQuAD dataset.",
"## Training and evaluation data\nTrained on SQuAD dataset:\n- HuggingFace/squad",
"## Training procedure\n\nModel was trained on 16 Graphcore Mk2 IPUs using the optimum-graphcore library."
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Graphcore/bert-large-uncased-squad\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nBERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. \n\nIt was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.\n\nIt reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.",
"## Intended uses & limitations\n\nThis model is a fine-tuned version of Graphcore/bert-large-uncased on the SQuAD dataset.",
"## Training and evaluation data\nTrained on SQuAD dataset:\n- HuggingFace/squad",
"## Training procedure\n\nModel was trained on 16 Graphcore Mk2 IPUs using the optimum-graphcore library."
] | [
46,
199,
189,
30,
17,
24
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n# Graphcore/bert-large-uncased-squad\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.## Model description\n\nBERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. \n\nIt was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.\n\nIt reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.## Intended uses & limitations\n\nThis model is a fine-tuned version of Graphcore/bert-large-uncased on the SQuAD dataset.## Training and evaluation data\nTrained on SQuAD dataset:\n- HuggingFace/squad## Training procedure\n\nModel was trained on 16 Graphcore Mk2 IPUs using the optimum-graphcore library."
] |
null | transformers |
# Graphcore/bert-large-uncased
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug and play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
BERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM.
It was pretrained with two objectives: masked language modelling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which sees the words one after another, MLM lets BERT learn a bidirectional representation of the text. In addition to MLM, NSP is used to jointly pretrain text-pair representations.
Through its pre-trained representation, it reduces the engineering effort needed to build task-specific architectures, and it achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
## Intended uses & limitations
This model is a pre-trained BERT-Large trained in two phases on the [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128) and [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512) datasets.
## Training and evaluation data
Trained on wikipedia datasets:
- [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128)
- [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512)
## Training procedure
Trained with the MLM and NSP pre-training scheme from [Large Batch Optimization for Deep Learning: Training BERT in 76 minutes](https://arxiv.org/abs/1904.00962).
Trained on 64 Graphcore Mk2 IPUs using [`optimum-graphcore`](https://github.com/huggingface/optimum-graphcore)
Command lines:
Phase 1:
```
python examples/language-modeling/run_pretraining.py \
--config_name bert-large-uncased \
--tokenizer_name bert-large-uncased \
--ipu_config_name Graphcore/bert-large-ipu \
--dataset_name Graphcore/wikipedia-bert-128 \
--do_train \
--logging_steps 5 \
--max_seq_length 128 \
--max_steps 10550 \
--is_already_preprocessed \
--dataloader_num_workers 64 \
--dataloader_mode async_rebatched \
--lamb \
--lamb_no_bias_correction \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 512 \
--pod_type pod64 \
--learning_rate 0.006 \
--lr_scheduler_type linear \
--loss_scaling 32768 \
--weight_decay 0.01 \
--warmup_ratio 0.28 \
--config_overrides "layer_norm_eps=0.001" \
--ipu_config_overrides "matmul_proportion=[0.14 0.19 0.19 0.19]" \
--output_dir output-pretrain-bert-large-phase1
```
Phase 2:
```
python examples/language-modeling/run_pretraining.py \
--config_name bert-large-uncased \
--tokenizer_name bert-large-uncased \
--model_name_or_path ./output-pretrain-bert-large-phase1 \
--ipu_config_name Graphcore/bert-large-ipu \
--dataset_name Graphcore/wikipedia-bert-512 \
--do_train \
--logging_steps 5 \
--max_seq_length 512 \
--max_steps 2038 \
--is_already_preprocessed \
--dataloader_num_workers 96 \
--dataloader_mode async_rebatched \
--lamb \
--lamb_no_bias_correction \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 512 \
--pod_type pod64 \
--learning_rate 0.002828 \
--lr_scheduler_type linear \
--loss_scaling 16384 \
--weight_decay 0.01 \
--warmup_ratio 0.128 \
--config_overrides "layer_norm_eps=0.001" \
--ipu_config_overrides "matmul_proportion=[0.14 0.19 0.19 0.19]" \
--output_dir output-pretrain-bert-large-phase2
```
### Training hyperparameters
The following hyperparameters were used during phase 1 training:
- learning_rate: 0.006
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 512
- total_train_batch_size: 65536
- total_eval_batch_size: 512
- optimizer: LAMB
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.28
- training_steps: 10550
- training precision: Mixed Precision
The following hyperparameters were used during phase 2 training:
- learning_rate: 0.002828
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 512
- total_train_batch_size: 16384
- total_eval_batch_size: 512
- optimizer: LAMB
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.128
- training_steps: 2038
- training precision: Mixed Precision
### Training results
```
train/epoch: 2.04
train/global_step: 2038
train/loss: 1.2002
train/train_runtime: 12022.3897
train/train_steps_per_second: 0.17
train/train_samples_per_second: 2777.367
```
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
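The pretrained weights are intended as a starting point for downstream fine-tuning. The sketch below shows one way to pick them up with the plain Transformers API, assuming the checkpoint loads like any other BERT model; the classification task, label count and tokenizer choice are illustrative only, and in practice the fine-tuning loop would use optimum-graphcore rather than the stock Trainer:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Tokenizer taken from the reference bert-large-uncased vocabulary used during pre-training.
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")

# Assumption: the weights in this repository load with the regular from_pretrained API.
# The classification head is freshly initialised and must be fine-tuned on a downstream task.
model = AutoModelForSequenceClassification.from_pretrained(
    "Graphcore/bert-large-uncased", num_labels=2
)
```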
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["Graphcore/wikipedia-bert-128", "Graphcore/wikipedia-bert-512"], "model-index": [{"name": "Graphcore/bert-large-uncased", "results": []}]} | Graphcore/bert-large-uncased | null | [
"transformers",
"pytorch",
"optimum_graphcore",
"bert",
"generated_from_trainer",
"dataset:Graphcore/wikipedia-bert-128",
"dataset:Graphcore/wikipedia-bert-512",
"arxiv:1904.00962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1904.00962"
] | [] | TAGS
#transformers #pytorch #optimum_graphcore #bert #generated_from_trainer #dataset-Graphcore/wikipedia-bert-128 #dataset-Graphcore/wikipedia-bert-512 #arxiv-1904.00962 #license-apache-2.0 #endpoints_compatible #region-us
|
# Graphcore/bert-large-uncased
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.
## Model description
BERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM.
It was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.
It reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
## Intended uses & limitations
This model is a pre-trained BERT-Large trained in two phases on the Graphcore/wikipedia-bert-128 and Graphcore/wikipedia-bert-512 datasets.
## Training and evaluation data
Trained on wikipedia datasets:
- Graphcore/wikipedia-bert-128
- Graphcore/wikipedia-bert-512
## Training procedure
Trained MLM and NSP pre-training scheme from Large Batch Optimization for Deep Learning: Training BERT in 76 minutes.
Trained on 64 Graphcore Mk2 IPUs using 'optimum-graphcore'
Command lines:
Phase 1:
Phase 2:
### Training hyperparameters
The following hyperparameters were used during phase 1 training:
- learning_rate: 0.006
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 512
- total_train_batch_size: 65536
- total_eval_batch_size: 512
- optimizer: LAMB
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.28
- training_steps: 10550
- training precision: Mixed Precision
The following hyperparameters were used during phase 2 training:
- learning_rate: 0.002828
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 512
- total_train_batch_size: 16384
- total_eval_batch_size: 512
- optimizer: LAMB
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.128
- training_steps: 2038
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
| [
"# Graphcore/bert-large-uncased\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nBERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. \n\nIt was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.\n\nIt reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.",
"## Intended uses & limitations\n\nThis model is a pre-trained BERT-Large trained in two phases on the Graphcore/wikipedia-bert-128 and Graphcore/wikipedia-bert-512 datasets.",
"## Training and evaluation data\n\nTrained on wikipedia datasets:\n- Graphcore/wikipedia-bert-128\n- Graphcore/wikipedia-bert-512",
"## Training procedure\n\nTrained MLM and NSP pre-training scheme from Large Batch Optimization for Deep Learning: Training BERT in 76 minutes.\nTrained on 64 Graphcore Mk2 IPUs using 'optimum-graphcore'\n\nCommand lines:\n\nPhase 1:\n\n\nPhase 2:",
"### Training hyperparameters\n\nThe following hyperparameters were used during phase 1 training:\n- learning_rate: 0.006\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: IPU\n- gradient_accumulation_steps: 512\n- total_train_batch_size: 65536\n- total_eval_batch_size: 512\n- optimizer: LAMB\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.28\n- training_steps: 10550\n- training precision: Mixed Precision\n\nThe following hyperparameters were used during phase 2 training:\n- learning_rate: 0.002828\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: IPU\n- gradient_accumulation_steps: 512\n- total_train_batch_size: 16384\n- total_eval_batch_size: 512\n- optimizer: LAMB\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.128\n- training_steps: 2038\n- training precision: Mixed Precision",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0\n- Pytorch 1.10.0+cpu\n- Datasets 2.0.0\n- Tokenizers 0.11.6"
] | [
"TAGS\n#transformers #pytorch #optimum_graphcore #bert #generated_from_trainer #dataset-Graphcore/wikipedia-bert-128 #dataset-Graphcore/wikipedia-bert-512 #arxiv-1904.00962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Graphcore/bert-large-uncased\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nBERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. \n\nIt was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.\n\nIt reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.",
"## Intended uses & limitations\n\nThis model is a pre-trained BERT-Large trained in two phases on the Graphcore/wikipedia-bert-128 and Graphcore/wikipedia-bert-512 datasets.",
"## Training and evaluation data\n\nTrained on wikipedia datasets:\n- Graphcore/wikipedia-bert-128\n- Graphcore/wikipedia-bert-512",
"## Training procedure\n\nTrained MLM and NSP pre-training scheme from Large Batch Optimization for Deep Learning: Training BERT in 76 minutes.\nTrained on 64 Graphcore Mk2 IPUs using 'optimum-graphcore'\n\nCommand lines:\n\nPhase 1:\n\n\nPhase 2:",
"### Training hyperparameters\n\nThe following hyperparameters were used during phase 1 training:\n- learning_rate: 0.006\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: IPU\n- gradient_accumulation_steps: 512\n- total_train_batch_size: 65536\n- total_eval_batch_size: 512\n- optimizer: LAMB\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.28\n- training_steps: 10550\n- training precision: Mixed Precision\n\nThe following hyperparameters were used during phase 2 training:\n- learning_rate: 0.002828\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: IPU\n- gradient_accumulation_steps: 512\n- total_train_batch_size: 16384\n- total_eval_batch_size: 512\n- optimizer: LAMB\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.128\n- training_steps: 2038\n- training precision: Mixed Precision",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0\n- Pytorch 1.10.0+cpu\n- Datasets 2.0.0\n- Tokenizers 0.11.6"
] | [
74,
197,
189,
43,
31,
54,
258,
5,
42
] | [
"TAGS\n#transformers #pytorch #optimum_graphcore #bert #generated_from_trainer #dataset-Graphcore/wikipedia-bert-128 #dataset-Graphcore/wikipedia-bert-512 #arxiv-1904.00962 #license-apache-2.0 #endpoints_compatible #region-us \n# Graphcore/bert-large-uncased\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.## Model description\n\nBERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. \n\nIt was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations.\n\nIt reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. 
And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.## Intended uses & limitations\n\nThis model is a pre-trained BERT-Large trained in two phases on the Graphcore/wikipedia-bert-128 and Graphcore/wikipedia-bert-512 datasets.## Training and evaluation data\n\nTrained on wikipedia datasets:\n- Graphcore/wikipedia-bert-128\n- Graphcore/wikipedia-bert-512## Training procedure\n\nTrained MLM and NSP pre-training scheme from Large Batch Optimization for Deep Learning: Training BERT in 76 minutes.\nTrained on 64 Graphcore Mk2 IPUs using 'optimum-graphcore'\n\nCommand lines:\n\nPhase 1:\n\n\nPhase 2:### Training hyperparameters\n\nThe following hyperparameters were used during phase 1 training:\n- learning_rate: 0.006\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: IPU\n- gradient_accumulation_steps: 512\n- total_train_batch_size: 65536\n- total_eval_batch_size: 512\n- optimizer: LAMB\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.28\n- training_steps: 10550\n- training precision: Mixed Precision\n\nThe following hyperparameters were used during phase 2 training:\n- learning_rate: 0.002828\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: IPU\n- gradient_accumulation_steps: 512\n- total_train_batch_size: 16384\n- total_eval_batch_size: 512\n- optimizer: LAMB\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.128\n- training_steps: 2038\n- training precision: Mixed Precision### Training results### Framework versions\n\n- Transformers 4.17.0\n- Pytorch 1.10.0+cpu\n- Datasets 2.0.0\n- Tokenizers 0.11.6"
] |
null | null | # Graphcore/deberta-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
DeBERTa ([Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654)) improves the BERT and RoBERTa models with a disentangled attention mechanism and an enhanced mask decoder, which replaces the output softmax layer to predict the masked tokens during model pretraining.
Together, these two techniques significantly improve the efficiency of model pre-training and the performance of downstream tasks.
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the DeBERTa-base model (e.g. [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base)) on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/deberta-base-ipu")
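
# --- Hedged sketch, not part of the original card ---------------------------
# One plausible way to pair this IPUConfig with actual DeBERTa weights and the
# IPUTrainer API from optimum-graphcore; the trainer arguments below are
# illustrative assumptions, not settings recommended by Graphcore.
from optimum.graphcore import IPUTrainer, IPUTrainingArguments
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-base")
training_args = IPUTrainingArguments(output_dir="./deberta-base-ipu-finetuned")
trainer = IPUTrainer(model=model, ipu_config=ipu_config, args=training_args)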
``` | {} | Graphcore/deberta-base-ipu | null | [
"optimum_graphcore",
"arxiv:2006.03654",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2006.03654"
] | [] | TAGS
#optimum_graphcore #arxiv-2006.03654 #region-us
| # Graphcore/deberta-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.
## Model description
DeBERTa(Decoding-enhanced BERT with Disentangled Attention ) improves the BERT and RoBERTa models using the disentangled attention mechanism and an enhanced mask decoder which is used to replace the output softmax layer to predict the masked tokens for model pretraining.
Through two techniques, it could significantly improve the efficiency of model pre-training and performance of downstream tasks.
# Intended uses & limitations
This model contains just the 'IPUConfig' files for running the DeBERTa-base model (e.g. microsoft/deberta-base) on Graphcore IPUs.
This model contains no model weights, only an IPUConfig.
## Usage
| [
"# Graphcore/deberta-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nDeBERTa(Decoding-enhanced BERT with Disentangled Attention ) improves the BERT and RoBERTa models using the disentangled attention mechanism and an enhanced mask decoder which is used to replace the output softmax layer to predict the masked tokens for model pretraining. \nThrough two techniques, it could significantly improve the efficiency of model pre-training and performance of downstream tasks.",
"# Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the DeBERTa-base model (e.g. microsoft/deberta-base) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
"TAGS\n#optimum_graphcore #arxiv-2006.03654 #region-us \n",
"# Graphcore/deberta-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nDeBERTa(Decoding-enhanced BERT with Disentangled Attention ) improves the BERT and RoBERTa models using the disentangled attention mechanism and an enhanced mask decoder which is used to replace the output softmax layer to predict the masked tokens for model pretraining. \nThrough two techniques, it could significantly improve the efficiency of model pre-training and performance of downstream tasks.",
"# Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the DeBERTa-base model (e.g. microsoft/deberta-base) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
21,
199,
81,
61,
3
] | [
"TAGS\n#optimum_graphcore #arxiv-2006.03654 #region-us \n# Graphcore/deberta-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.## Model description\n\nDeBERTa(Decoding-enhanced BERT with Disentangled Attention ) improves the BERT and RoBERTa models using the disentangled attention mechanism and an enhanced mask decoder which is used to replace the output softmax layer to predict the masked tokens for model pretraining. \nThrough two techniques, it could significantly improve the efficiency of model pre-training and performance of downstream tasks.# Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the DeBERTa-base model (e.g. microsoft/deberta-base) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.## Usage"
] |
null | null |
# Graphcore/gpt2-medium-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
GPT2 is a large transformer-based language model. It is built using transformer decoder blocks; BERT, on the other hand, uses transformer encoder blocks. It adds layer normalisation at the input of each sub-block, similar to pre-activation residual networks, and an additional layer normalisation after the final self-attention block.
Paper link : [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the [HuggingFace/gpt2-medium](https://huggingface.co/gpt2-medium) model on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/gpt2-medium-ipu")
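
# Hedged sketch (not from the original card): the config is normally combined
# with the matching gpt2-medium checkpoint and optimum-graphcore's IPUTrainer;
# the trainer arguments here are illustrative assumptions only.
from optimum.graphcore import IPUTrainer, IPUTrainingArguments
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2-medium")
trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=IPUTrainingArguments(output_dir="./gpt2-medium-ipu"),
)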
``` | {"license": "apache-2.0"} | Graphcore/gpt2-medium-ipu | null | [
"optimum_graphcore",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#optimum_graphcore #license-apache-2.0 #region-us
|
# Graphcore/gpt2-medium-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.
## Model description
GPT2 is a large transformer-based language model. It is built using transformer decoder blocks. BERT, on the other hand, uses transformer encoder blocks. It adds Layer normalisation to the input of each sub-block, similar to a pre-activation residual networks and an additional layer normalisation.
Paper link : Language Models are Unsupervised Multitask Learners
## Intended uses & limitations
This model contains just the 'IPUConfig' files for running the HuggingFace/gpt2-medium model on Graphcore IPUs.
This model contains no model weights, only an IPUConfig.
## Usage
| [
"# Graphcore/gpt2-medium-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\nGPT2 is a large transformer-based language model. It is built using transformer decoder blocks. BERT, on the other hand, uses transformer encoder blocks. It adds Layer normalisation to the input of each sub-block, similar to a pre-activation residual networks and an additional layer normalisation. \n \nPaper link : Language Models are Unsupervised Multitask Learners",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the HuggingFace/gpt2-medium model on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
"TAGS\n#optimum_graphcore #license-apache-2.0 #region-us \n",
"# Graphcore/gpt2-medium-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\nGPT2 is a large transformer-based language model. It is built using transformer decoder blocks. BERT, on the other hand, uses transformer encoder blocks. It adds Layer normalisation to the input of each sub-block, similar to a pre-activation residual networks and an additional layer normalisation. \n \nPaper link : Language Models are Unsupervised Multitask Learners",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the HuggingFace/gpt2-medium model on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
19,
199,
86,
52,
3
] | [
"TAGS\n#optimum_graphcore #license-apache-2.0 #region-us \n# Graphcore/gpt2-medium-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.## Model description\nGPT2 is a large transformer-based language model. It is built using transformer decoder blocks. BERT, on the other hand, uses transformer encoder blocks. It adds Layer normalisation to the input of each sub-block, similar to a pre-activation residual networks and an additional layer normalisation. \n \nPaper link : Language Models are Unsupervised Multitask Learners## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the HuggingFace/gpt2-medium model on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.## Usage"
] |
null | null |
# Graphcore/gpt2-small-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
GPT2 is a large transformer-based language model. It is built using transformer decoder blocks; BERT, on the other hand, uses transformer encoder blocks. It adds layer normalisation at the input of each sub-block, similar to pre-activation residual networks, and an additional layer normalisation after the final self-attention block.
Paper link : [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the [GPT2 Small](https://huggingface.co/gpt2) model on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/gpt2-small-ipu")
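
# Hedged sketch (not from the original card): the IPUConfig is intended to be
# used alongside the matching "GPT2 Small" checkpoint (the plain `gpt2` model),
# e.g. when handing both to optimum-graphcore's IPUTrainer.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # the GPT2 Small checkpoint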
``` | {"license": "apache-2.0"} | Graphcore/gpt2-small-ipu | null | [
"optimum_graphcore",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#optimum_graphcore #license-apache-2.0 #region-us
|
# Graphcore/gpt2-small-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.
## Model description
GPT2 is a large transformer-based language model. It is built using transformer decoder blocks. BERT, on the other hand, uses transformer encoder blocks. It adds Layer normalisation to the input of each sub-block, similar to a pre-activation residual networks and an additional layer normalisation.
Paper link : Language Models are Unsupervised Multitask Learners
## Intended uses & limitations
This model contains just the 'IPUConfig' files for running the GPT2 Small model on Graphcore IPUs.
This model contains no model weights, only an IPUConfig.
## Usage
| [
"# Graphcore/gpt2-small-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\nGPT2 is a large transformer-based language model. It is built using transformer decoder blocks. BERT, on the other hand, uses transformer encoder blocks. It adds Layer normalisation to the input of each sub-block, similar to a pre-activation residual networks and an additional layer normalisation. \n \nPaper link : Language Models are Unsupervised Multitask Learners",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the GPT2 Small model on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
"TAGS\n#optimum_graphcore #license-apache-2.0 #region-us \n",
"# Graphcore/gpt2-small-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\nGPT2 is a large transformer-based language model. It is built using transformer decoder blocks. BERT, on the other hand, uses transformer encoder blocks. It adds Layer normalisation to the input of each sub-block, similar to a pre-activation residual networks and an additional layer normalisation. \n \nPaper link : Language Models are Unsupervised Multitask Learners",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the GPT2 Small model on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
19,
199,
86,
48,
3
] | [
"TAGS\n#optimum_graphcore #license-apache-2.0 #region-us \n# Graphcore/gpt2-small-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.## Model description\nGPT2 is a large transformer-based language model. It is built using transformer decoder blocks. BERT, on the other hand, uses transformer encoder blocks. It adds Layer normalisation to the input of each sub-block, similar to a pre-activation residual networks and an additional layer normalisation. \n \nPaper link : Language Models are Unsupervised Multitask Learners## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the GPT2 Small model on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.## Usage"
] |
null | null |
# Graphcore/roberta-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
RoBERTa is based on the BERT pretraining approach and improves on it by carefully evaluating a number of design decisions of BERT pretraining which were found to leave the model undertrained.
It improves performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objective, training on longer sequences and dynamically changing the mask pattern applied to the training data.
As a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.
Paper link : [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf)
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the [roberta-base](https://huggingface.co/roberta-base) model on Graphcore IPUs.
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/roberta-base-ipu")
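
# --- Hedged sketch, not part of the original card ---------------------------
# Example pairing of this IPUConfig with the roberta-base checkpoint for masked
# language modelling; IPUTrainer/IPUTrainingArguments values are assumptions.
from optimum.graphcore import IPUTrainer, IPUTrainingArguments
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("roberta-base")
args = IPUTrainingArguments(output_dir="./roberta-base-ipu")
trainer = IPUTrainer(model=model, ipu_config=ipu_config, args=args)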
``` | {"license": "apache-2.0"} | Graphcore/roberta-base-ipu | null | [
"optimum_graphcore",
"arxiv:1907.11692",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1907.11692"
] | [] | TAGS
#optimum_graphcore #arxiv-1907.11692 #license-apache-2.0 #region-us
|
# Graphcore/roberta-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.
## Model description
RoBERTa is based on BERT pretraining approach and improves on it by carefully evaluating a number of design decisions of BERT pretraining which it found to cause the model to be undertrained.
It suggested a way to improve the performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objectives, training on longer sequences and dynamically changing the mask pattern applied to the training data.
As a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.
Paper link : RoBERTa: A Robustly Optimized BERT Pretraining Approach
## Intended uses & limitations
This model contains just the 'IPUConfig' files for running the roberta-base model on Graphcore IPUs.
## Usage
| [
"# Graphcore/roberta-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\nRoBERTa is based on BERT pretraining approach and improves on it by carefully evaluating a number of design decisions of BERT pretraining which it found to cause the model to be undertrained. \n\nIt suggested a way to improve the performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objectives, training on longer sequences and dynamically changing the mask pattern applied to the training data.\n\nAs a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.\n\nPaper link : RoBERTa: A Robustly Optimized BERT Pretraining Approach",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the roberta-base model on Graphcore IPUs.",
"## Usage"
] | [
"TAGS\n#optimum_graphcore #arxiv-1907.11692 #license-apache-2.0 #region-us \n",
"# Graphcore/roberta-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\nRoBERTa is based on BERT pretraining approach and improves on it by carefully evaluating a number of design decisions of BERT pretraining which it found to cause the model to be undertrained. \n\nIt suggested a way to improve the performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objectives, training on longer sequences and dynamically changing the mask pattern applied to the training data.\n\nAs a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.\n\nPaper link : RoBERTa: A Robustly Optimized BERT Pretraining Approach",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the roberta-base model on Graphcore IPUs.",
"## Usage"
] | [
29,
197,
126,
32,
3
] | [
"TAGS\n#optimum_graphcore #arxiv-1907.11692 #license-apache-2.0 #region-us \n# Graphcore/roberta-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.## Model description\nRoBERTa is based on BERT pretraining approach and improves on it by carefully evaluating a number of design decisions of BERT pretraining which it found to cause the model to be undertrained. \n\nIt suggested a way to improve the performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objectives, training on longer sequences and dynamically changing the mask pattern applied to the training data.\n\nAs a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.\n\nPaper link : RoBERTa: A Robustly Optimized BERT Pretraining Approach## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the roberta-base model on Graphcore IPUs.## Usage"
] |
null | null | # Graphcore/roberta-large-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
RoBERTa is based on the BERT pretraining approach and improves on it by carefully evaluating a number of design decisions of BERT pretraining which were found to leave the model undertrained.
It improves performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objective, training on longer sequences and dynamically changing the mask pattern applied to the training data.
As a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.
Paper link : [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf)
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the [roberta-large](https://huggingface.co/roberta-large) model on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/roberta-large-ipu")
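
# Hedged sketch (not from the original card): the config is used together with
# the roberta-large weights, e.g. for SQuAD-style question answering, before
# both are handed to optimum-graphcore's IPUTrainer.
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("roberta-large")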
``` | {} | Graphcore/roberta-large-ipu | null | [
"optimum_graphcore",
"arxiv:1907.11692",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1907.11692"
] | [] | TAGS
#optimum_graphcore #arxiv-1907.11692 #region-us
| # Graphcore/roberta-large-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.
## Model description
RoBERTa is based on BERT pretraining approach and improves on it by carefully evaluating a number of design decisions of BERT pretraining which it found to cause the model to be undertrained.
It suggested a way to improve the performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objectives, training on longer sequences and dynamically changing the mask pattern applied to the training data.
As a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.
Paper link : RoBERTa: A Robustly Optimized BERT Pretraining Approach
## Intended uses & limitations
This model contains just the 'IPUConfig' files for running the roberta-large model on Graphcore IPUs.
This model contains no model weights, only an IPUConfig.
## Usage
| [
"# Graphcore/roberta-large-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\nRoBERTa is based on BERT pretraining approach and improves on it by carefully evaluating a number of design decisions of BERT pretraining which it found to cause the model to be undertrained. \n\nIt suggested a way to improve the performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objectives, training on longer sequences and dynamically changing the mask pattern applied to the training data.\n\nAs a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.\n\nPaper link : RoBERTa: A Robustly Optimized BERT Pretraining Approach",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the roberta-large model on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
"TAGS\n#optimum_graphcore #arxiv-1907.11692 #region-us \n",
"# Graphcore/roberta-large-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\nRoBERTa is based on BERT pretraining approach and improves on it by carefully evaluating a number of design decisions of BERT pretraining which it found to cause the model to be undertrained. \n\nIt suggested a way to improve the performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objectives, training on longer sequences and dynamically changing the mask pattern applied to the training data.\n\nAs a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.\n\nPaper link : RoBERTa: A Robustly Optimized BERT Pretraining Approach",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the roberta-large model on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
21,
197,
126,
47,
3
] | [
"TAGS\n#optimum_graphcore #arxiv-1907.11692 #region-us \n# Graphcore/roberta-large-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.## Model description\nRoBERTa is based on BERT pretraining approach and improves on it by carefully evaluating a number of design decisions of BERT pretraining which it found to cause the model to be undertrained. \n\nIt suggested a way to improve the performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objectives, training on longer sequences and dynamically changing the mask pattern applied to the training data.\n\nAs a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.\n\nPaper link : RoBERTa: A Robustly Optimized BERT Pretraining Approach## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the roberta-large model on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.## Usage"
] |
null | null |
# Graphcore/t5-small-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
Text-to-Text Transfer Transformer (T5) is a Transformer-based model that uses a text-to-text approach for translation, question answering, and classification. It introduces a unified framework that converts all text-based language problems into a text-to-text format for transfer learning in NLP. This allows the same model, loss function, hyperparameters, etc. to be used across a diverse set of tasks.
Paper link : [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the T5 Small model (e.g. [HuggingFace/t5-small](https://huggingface.co/t5-small)) on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/t5-small-ipu")
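
# --- Hedged sketch, not part of the original card ---------------------------
# Pairing this IPUConfig with the t5-small checkpoint; using the generic
# IPUTrainer (rather than a seq2seq-specific trainer) is an assumption here.
from optimum.graphcore import IPUTrainer, IPUTrainingArguments
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
args = IPUTrainingArguments(output_dir="./t5-small-ipu")
trainer = IPUTrainer(model=model, ipu_config=ipu_config, args=args)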
``` | {"license": "apache-2.0"} | Graphcore/t5-small-ipu | null | [
"optimum_graphcore",
"arxiv:1910.10683",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.10683"
] | [] | TAGS
#optimum_graphcore #arxiv-1910.10683 #license-apache-2.0 #region-us
|
# Graphcore/t5-small-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.
## Model description
Text-to-Text Transfer Transformer (T5), is a Transformer based model that uses a text-to-text approach for translation, question answering, and classification. It introduces an unified framework that converts all text-based language problems into a text-to-text format for transfer learning for NLP. This allows for the use of the same model, loss function, hyperparameters, etc. across our diverse set of tasks.
Paper link :Exploring the Limits of Transfer Learning with a Unified
Text-to-Text Transformer
## Intended uses & limitations
This model contains just the 'IPUConfig' files for running the T5 Small model (e.g. HuggingFace/t5-small) on Graphcore IPUs.
This model contains no model weights, only an IPUConfig.
## Usage
| [
"# Graphcore/t5-small-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nText-to-Text Transfer Transformer (T5), is a Transformer based model that uses a text-to-text approach for translation, question answering, and classification. It introduces an unified framework that converts all text-based language problems into a text-to-text format for transfer learning for NLP. This allows for the use of the same model, loss function, hyperparameters, etc. across our diverse set of tasks. \n\nPaper link :Exploring the Limits of Transfer Learning with a Unified\nText-to-Text Transformer",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the T5 Small model (e.g. HuggingFace/t5-small) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
"TAGS\n#optimum_graphcore #arxiv-1910.10683 #license-apache-2.0 #region-us \n",
"# Graphcore/t5-small-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nText-to-Text Transfer Transformer (T5), is a Transformer based model that uses a text-to-text approach for translation, question answering, and classification. It introduces an unified framework that converts all text-based language problems into a text-to-text format for transfer learning for NLP. This allows for the use of the same model, loss function, hyperparameters, etc. across our diverse set of tasks. \n\nPaper link :Exploring the Limits of Transfer Learning with a Unified\nText-to-Text Transformer",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the T5 Small model (e.g. HuggingFace/t5-small) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
29,
198,
115,
60,
3
] | [
"TAGS\n#optimum_graphcore #arxiv-1910.10683 #license-apache-2.0 #region-us \n# Graphcore/t5-small-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.## Model description\n\nText-to-Text Transfer Transformer (T5), is a Transformer based model that uses a text-to-text approach for translation, question answering, and classification. It introduces an unified framework that converts all text-based language problems into a text-to-text format for transfer learning for NLP. This allows for the use of the same model, loss function, hyperparameters, etc. across our diverse set of tasks. \n\nPaper link :Exploring the Limits of Transfer Learning with a Unified\nText-to-Text Transformer## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the T5 Small model (e.g. HuggingFace/t5-small) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.## Usage"
] |
null | null | # Graphcore/vit-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset, and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
The Vision Transformer (ViT) is a model for image recognition that applies a Transformer-like architecture, of the kind widely used for NLP pretraining, over patches of the image.
It uses a standard Transformer encoder as used in NLP, and this simple yet scalable strategy works surprisingly well when coupled with pre-training on large amounts of data and transferred to image recognition benchmarks of multiple sizes, while requiring substantially fewer computational resources to train.
Paper link : [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/pdf/2010.11929.pdf)
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the ViT base model (e.g. [vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) or [deit-base-patch16-384](https://huggingface.co/facebook/deit-base-patch16-384)) on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/vit-base-ipu")
``` | {} | Graphcore/vit-base-ipu | null | [
"optimum_graphcore",
"arxiv:2010.11929",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2010.11929"
] | [] | TAGS
#optimum_graphcore #arxiv-2010.11929 #region-us
| # Graphcore/vit-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at URL
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset, and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
The Vision Transformer (ViT) is a model for image recognition that applies a Transformer-like architecture, of the kind widely used for NLP pretraining, over patches of the image.
It uses a standard Transformer encoder as used in NLP, and this simple yet scalable strategy works surprisingly well when coupled with pre-training on large amounts of data and transferred to image recognition benchmarks of multiple sizes, while requiring substantially fewer computational resources to train.
Paper link : An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
## Intended uses & limitations
This model contains just the 'IPUConfig' files for running the ViT base model (e.g. vit-base-patch16-224-in21k or deit-base-patch16-384) on Graphcore IPUs.
This model contains no model weights, only an IPUConfig.
## Usage
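The corresponding loading snippet, as given in the raw card above:

```python
from optimum.graphcore import IPUConfig

ipu_config = IPUConfig.from_pretrained("Graphcore/vit-base-ipu")
```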
| [
"# Graphcore/vit-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nThe Vision Transformer (ViT) is a model for image recognition that employs a Transformer-like architecture over patches of the image which was widely used for NLP pretraining. \n\nIt uses a standard Transformer encoder as used in NLP and simple, yet scalable, strategy works surprisingly well when coupled with pre-training on large amounts of dataset and tranferred to multiple size image recognition benchmarks while requiring substantially fewer computational resources to train. \n \nPaper link : AN IMAGE IS WORTH 16X16 WORDS:TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the ViT base model (e.g. vit-base-patch16-224-in21k or deit-base-patch16-384) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
"TAGS\n#optimum_graphcore #arxiv-2010.11929 #region-us \n",
"# Graphcore/vit-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.",
"## Model description\n\nThe Vision Transformer (ViT) is a model for image recognition that employs a Transformer-like architecture over patches of the image which was widely used for NLP pretraining. \n\nIt uses a standard Transformer encoder as used in NLP and simple, yet scalable, strategy works surprisingly well when coupled with pre-training on large amounts of dataset and tranferred to multiple size image recognition benchmarks while requiring substantially fewer computational resources to train. \n \nPaper link : AN IMAGE IS WORTH 16X16 WORDS:TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE",
"## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the ViT base model (e.g. vit-base-patch16-224-in21k or deit-base-patch16-384) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.",
"## Usage"
] | [
20,
198,
117,
77,
3
] | [
"TAGS\n#optimum_graphcore #arxiv-2010.11929 #region-us \n# Graphcore/vit-base-ipu\n\nOptimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at URL\n\nThrough HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.## Model description\n\nThe Vision Transformer (ViT) is a model for image recognition that employs a Transformer-like architecture over patches of the image which was widely used for NLP pretraining. \n\nIt uses a standard Transformer encoder as used in NLP and simple, yet scalable, strategy works surprisingly well when coupled with pre-training on large amounts of dataset and tranferred to multiple size image recognition benchmarks while requiring substantially fewer computational resources to train. \n \nPaper link : AN IMAGE IS WORTH 16X16 WORDS:TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE## Intended uses & limitations\n\nThis model contains just the 'IPUConfig' files for running the ViT base model (e.g. vit-base-patch16-224-in21k or deit-base-patch16-384) on Graphcore IPUs.\n\nThis model contains no model weights, only an IPUConfig.## Usage"
] |
null | adapter-transformers |
# Adapter `Gregor/bert-base-multilingual-cased-wmt21-qe` for bert-base-multilingual-cased
An [adapter](https://adapterhub.ml) for the bert-base-multilingual-cased model that was trained on the [quality_estimation/wmt21](https://adapterhub.ml/explore/quality_estimation/wmt21/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-multilingual-cased")
adapter_name = model.load_adapter("Gregor/bert-base-multilingual-cased-wmt21-qe")
model.active_adapters = adapter_name
```
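
Once the adapter is active, quality estimation can be run as ordinary sentence-pair classification. A minimal inference sketch, assuming the adapter expects the source segment and its machine translation encoded as a sentence pair; the example texts are our own, and the exact input convention and label meaning are not documented by this card:

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

# Hypothetical source sentence and its machine translation, encoded as a pair.
inputs = tokenizer("Das ist ein Beispiel.", "This is an example.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The prediction head loaded with the adapter produces the quality-estimation output.
print(outputs)
```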
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["adapter-transformers", "adapterhub:quality_estimation/wmt21", "bert"]} | Gregor/bert-base-multilingual-cased-wmt21-qe | null | [
"adapter-transformers",
"bert",
"adapterhub:quality_estimation/wmt21",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#adapter-transformers #bert #adapterhub-quality_estimation/wmt21 #region-us
|
# Adapter 'Gregor/bert-base-multilingual-cased-wmt21-qe' for bert-base-multilingual-cased
An adapter for the bert-base-multilingual-cased model that was trained on the quality_estimation/wmt21 dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
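(Snippet reproduced from the usage section of this card.)

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-multilingual-cased")
adapter_name = model.load_adapter("Gregor/bert-base-multilingual-cased-wmt21-qe")
model.active_adapters = adapter_name
```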
## Architecture & Training
## Evaluation results
| [
"# Adapter 'Gregor/bert-base-multilingual-cased-wmt21-qe' for bert-base-multilingual-cased\n\nAn adapter for the bert-base-multilingual-cased model that was trained on the quality_estimation/wmt21 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
"TAGS\n#adapter-transformers #bert #adapterhub-quality_estimation/wmt21 #region-us \n",
"# Adapter 'Gregor/bert-base-multilingual-cased-wmt21-qe' for bert-base-multilingual-cased\n\nAn adapter for the bert-base-multilingual-cased model that was trained on the quality_estimation/wmt21 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
25,
88,
53,
5,
4
] | [
"TAGS\n#adapter-transformers #bert #adapterhub-quality_estimation/wmt21 #region-us \n# Adapter 'Gregor/bert-base-multilingual-cased-wmt21-qe' for bert-base-multilingual-cased\n\nAn adapter for the bert-base-multilingual-cased model that was trained on the quality_estimation/wmt21 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training## Evaluation results"
] |
null | adapter-transformers |
# Adapter `Gregor/xlm-roberta-base-wmt21-qe` for xlm-roberta-base
An [adapter](https://adapterhub.ml) for the xlm-roberta-base model that was trained on the [quality_estimation/wmt21](https://adapterhub.ml/explore/quality_estimation/wmt21/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("xlm-roberta-base")
adapter_name = model.load_adapter("Gregor/xlm-roberta-base-wmt21-qe")
model.active_adapters = adapter_name
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["adapter-transformers", "adapterhub:quality_estimation/wmt21", "xlm-roberta"]} | Gregor/xlm-roberta-base-wmt21-qe | null | [
"adapter-transformers",
"xlm-roberta",
"adapterhub:quality_estimation/wmt21",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#adapter-transformers #xlm-roberta #adapterhub-quality_estimation/wmt21 #region-us
|
# Adapter 'Gregor/xlm-roberta-base-wmt21-qe' for xlm-roberta-base
An adapter for the xlm-roberta-base model that was trained on the quality_estimation/wmt21 dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
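(Snippet reproduced from the usage section of this card.)

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("xlm-roberta-base")
adapter_name = model.load_adapter("Gregor/xlm-roberta-base-wmt21-qe")
model.active_adapters = adapter_name
```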
## Architecture & Training
## Evaluation results
| [
"# Adapter 'Gregor/xlm-roberta-base-wmt21-qe' for xlm-roberta-base\n\nAn adapter for the xlm-roberta-base model that was trained on the quality_estimation/wmt21 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
"TAGS\n#adapter-transformers #xlm-roberta #adapterhub-quality_estimation/wmt21 #region-us \n",
"# Adapter 'Gregor/xlm-roberta-base-wmt21-qe' for xlm-roberta-base\n\nAn adapter for the xlm-roberta-base model that was trained on the quality_estimation/wmt21 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
28,
76,
53,
5,
4
] | [
"TAGS\n#adapter-transformers #xlm-roberta #adapterhub-quality_estimation/wmt21 #region-us \n# Adapter 'Gregor/xlm-roberta-base-wmt21-qe' for xlm-roberta-base\n\nAn adapter for the xlm-roberta-base model that was trained on the quality_estimation/wmt21 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training## Evaluation results"
] |
null | adapter-transformers |
# Adapter `Gregor/xlm-roberta-large-wmt21-qe` for xlm-roberta-large
An [adapter](https://adapterhub.ml) for the xlm-roberta-large model that was trained on the [quality_estimation/wmt21](https://adapterhub.ml/explore/quality_estimation/wmt21/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("xlm-roberta-large")
adapter_name = model.load_adapter("Gregor/xlm-roberta-large-wmt21-qe")
model.active_adapters = adapter_name
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["adapter-transformers", "xlm-roberta", "adapterhub:quality_estimation/wmt21"]} | Gregor/xlm-roberta-large-wmt21-qe | null | [
"adapter-transformers",
"xlm-roberta",
"adapterhub:quality_estimation/wmt21",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#adapter-transformers #xlm-roberta #adapterhub-quality_estimation/wmt21 #region-us
|
# Adapter 'Gregor/xlm-roberta-large-wmt21-qe' for xlm-roberta-large
An adapter for the xlm-roberta-large model that was trained on the quality_estimation/wmt21 dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
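(Snippet reproduced from the usage section of this card.)

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("xlm-roberta-large")
adapter_name = model.load_adapter("Gregor/xlm-roberta-large-wmt21-qe")
model.active_adapters = adapter_name
```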
## Architecture & Training
## Evaluation results
| [
"# Adapter 'Gregor/xlm-roberta-large-wmt21-qe' for xlm-roberta-large\n\nAn adapter for the xlm-roberta-large model that was trained on the quality_estimation/wmt21 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
"TAGS\n#adapter-transformers #xlm-roberta #adapterhub-quality_estimation/wmt21 #region-us \n",
"# Adapter 'Gregor/xlm-roberta-large-wmt21-qe' for xlm-roberta-large\n\nAn adapter for the xlm-roberta-large model that was trained on the quality_estimation/wmt21 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
28,
76,
53,
5,
4
] | [
"TAGS\n#adapter-transformers #xlm-roberta #adapterhub-quality_estimation/wmt21 #region-us \n# Adapter 'Gregor/xlm-roberta-large-wmt21-qe' for xlm-roberta-large\n\nAn adapter for the xlm-roberta-large model that was trained on the quality_estimation/wmt21 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training## Evaluation results"
] |
text-generation | transformers |
# rick and morty | {"tags": ["conversational", "PyTorch", "Transformers", "gpt2", "lm-head", "causal-lm", "text-generation"]} | Gregor-Davies/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"PyTorch",
"Transformers",
"lm-head",
"causal-lm",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #PyTorch #Transformers #lm-head #causal-lm #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# rick and morty | [
"# rick and morty"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #PyTorch #Transformers #lm-head #causal-lm #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# rick and morty"
] | [
56,
5
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #PyTorch #Transformers #lm-head #causal-lm #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# rick and morty"
] |
text-generation | transformers |
# The Owl House DialoGPT Model | {"tags": ["conversational"]} | Greysan/DialoGPT-medium-TOH | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# The Owl House DialoGPT Model | [
"# The Owl House DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# The Owl House DialoGPT Model"
] | [
39,
8
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# The Owl House DialoGPT Model"
] |
fill-mask | transformers |
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- 📝 [Paper](https://arxiv.org/abs/2105.02855)
- 💻 [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
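
These checkpoints are ordinary BERT models, so they load with the standard Auto classes. A minimal sketch for the West Frisian variant hosted on this page (the other variants listed above load the same way):

```python
from transformers import AutoTokenizer, AutoModel

# Only the lexical layer (and matching tokenizer) differs from plain BERTje.
tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased-frisian")
model = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased-frisian")
```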
| {"language": "fy", "tags": ["BERTje"]} | GroNLP/bert-base-dutch-cased-frisian | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"BERTje",
"fy",
"arxiv:2105.02855",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2105.02855"
] | [
"fy"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #BERTje #fy #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us
|
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- Paper
- Code
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').
- 'GroNLP/bert-base-dutch-cased' (Dutch; source language)
- 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)
- 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)
- 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)
- 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)
| [
"# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code",
"## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:",
"### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)",
"### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #BERTje #fy #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code",
"## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:",
"### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)",
"### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] | [
50,
30,
27,
108,
124
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #BERTje #fy #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us \n# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] |
fill-mask | transformers |
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- 📝 [Paper](https://arxiv.org/abs/2105.02855)
- 💻 [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
| {"language": "gos", "tags": ["BERTje"]} | GroNLP/bert-base-dutch-cased-gronings | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"BERTje",
"gos",
"arxiv:2105.02855",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2105.02855"
] | [
"gos"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #BERTje #gos #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us
|
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- Paper
- Code
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').
- 'GroNLP/bert-base-dutch-cased' (Dutch; source language)
- 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)
- 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)
- 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)
- 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)
| [
"# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code",
"## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:",
"### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)",
"### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #BERTje #gos #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code",
"## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:",
"### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)",
"### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] | [
50,
30,
27,
108,
124
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #BERTje #gos #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us \n# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] |
token-classification | transformers |
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- 📝 [Paper](https://arxiv.org/abs/2105.02855)
- 💻 [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
| {"language": "fy", "tags": ["BERTje", "pos"]} | GroNLP/bert-base-dutch-cased-upos-alpino-frisian | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"BERTje",
"pos",
"fy",
"arxiv:2105.02855",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2105.02855"
] | [
"fy"
] | TAGS
#transformers #pytorch #tf #jax #bert #token-classification #BERTje #pos #fy #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us
|
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- Paper
- Code
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').
- 'GroNLP/bert-base-dutch-cased' (Dutch; source language)
- 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)
- 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)
- 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)
- 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)
| [
"# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code",
"## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:",
"### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)",
"### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #BERTje #pos #fy #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code",
"## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:",
"### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)",
"### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] | [
53,
30,
27,
108,
124
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #BERTje #pos #fy #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us \n# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] |
token-classification | transformers |
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- 📝 [Paper](https://arxiv.org/abs/2105.02855)
- 💻 [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
| {"language": "gos", "tags": ["BERTje", "pos"]} | GroNLP/bert-base-dutch-cased-upos-alpino-gronings | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"BERTje",
"pos",
"gos",
"arxiv:2105.02855",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2105.02855"
] | [
"gos"
] | TAGS
#transformers #pytorch #tf #jax #bert #token-classification #BERTje #pos #gos #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us
|
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- Paper
- Code
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').
- 'GroNLP/bert-base-dutch-cased' (Dutch; source language)
- 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)
- 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)
- 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)
- 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)
| [
"# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code",
"## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:",
"### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)",
"### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #BERTje #pos #gos #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code",
"## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:",
"### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)",
"### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] | [
53,
30,
27,
108,
124
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #BERTje #pos #gos #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us \n# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] |
token-classification | transformers |
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- 📝 [Paper](https://arxiv.org/abs/2105.02855)
- 💻 [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
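
A quick way to try the Dutch tagger hosted on this page is the standard token-classification pipeline; the example sentence below is our own, and the printed fields follow the pipeline's default output format:

```python
from transformers import pipeline

# UPOS tagging with the Dutch model; swap the model id for the Gronings or Frisian variant.
tagger = pipeline("token-classification", model="GroNLP/bert-base-dutch-cased-upos-alpino")

for token in tagger("Dit is een korte Nederlandse zin."):
    print(token["word"], token["entity"])
```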
| {"language": "nl", "tags": ["BERTje", "pos"]} | GroNLP/bert-base-dutch-cased-upos-alpino | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"token-classification",
"BERTje",
"pos",
"nl",
"arxiv:2105.02855",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2105.02855"
] | [
"nl"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #token-classification #BERTje #pos #nl #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us
|
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- Paper
- Code
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').
- 'GroNLP/bert-base-dutch-cased' (Dutch; source language)
- 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)
- 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)
- 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)
- 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)
| [
"# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code",
"## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:",
"### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)",
"### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #token-classification #BERTje #pos #nl #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code",
"## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:",
"### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)",
"### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] | [
56,
30,
27,
108,
124
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #token-classification #BERTje #pos #nl #arxiv-2105.02855 #autotrain_compatible #endpoints_compatible #region-us \n# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High\n\nThis model is part of this paper + code:\n\n- Paper\n- Code## Models\n\nThe best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:### Lexical layers\nThese models are identical to BERTje, but with different lexical layers ('bert.embeddings.word_embeddings').\n\n - 'GroNLP/bert-base-dutch-cased' (Dutch; source language)\n - 'GroNLP/bert-base-dutch-cased-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-frisian' (West Frisian)### POS tagging\nThese models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.\n\n - 'GroNLP/bert-base-dutch-cased-upos-alpino' (Dutch)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-gronings' (Gronings)\n - 'GroNLP/bert-base-dutch-cased-upos-alpino-frisian' (West Frisian)"
] |
fill-mask | transformers |
# BERTje: A Dutch BERT model
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Andreas van Cranenburgh](https://www.semanticscholar.org/author/Andreas-van-Cranenburgh/2791585) •
[Arianna Bisazza](https://www.semanticscholar.org/author/Arianna-Bisazza/3242253) •
[Tommaso Caselli](https://www.semanticscholar.org/author/Tommaso-Caselli/1864635) •
[Gertjan van Noord](https://www.semanticscholar.org/author/Gertjan-van-Noord/143715131) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
BERTje is a Dutch pre-trained BERT model developed at the University of Groningen.
<img src="https://raw.githubusercontent.com/wietsedv/bertje/master/bertje.png" height="250">
For details, check out our paper on [arXiv](https://arxiv.org/abs/1912.09582), the code on [Github](https://github.com/wietsedv/bertje) and related work on [Semantic Scholar](https://www.semanticscholar.org/paper/BERTje%3A-A-Dutch-BERT-Model-Vries-Cranenburgh/a4d5e425cac0bf84c86c0c9f720b6339d6288ffa).
The paper and Github page mention fine-tuned models that are available [here](https://huggingface.co/wietsedv).
## How to use
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
model = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/bert-base-dutch-cased") # Tensorflow
```
**WARNING:** The vocabulary size of BERTje has changed in 2021. If you use an older fine-tuned model and experience problems with the `GroNLP/bert-base-dutch-cased` tokenizer, use the following tokenizer:
```python
tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased", revision="v1") # v1 is the old vocabulary
```
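As a quick sanity check, the model can also be queried through the fill-mask pipeline. This is a generic usage sketch rather than an official example; the Dutch sentence is our own.

```python
from transformers import pipeline

# Generic fill-mask sketch for BERTje; the example sentence is our own choice.
fill_mask = pipeline("fill-mask", model="GroNLP/bert-base-dutch-cased")
print(fill_mask("Groningen ligt in het [MASK] van Nederland."))
```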
## Benchmarks
The arXiv paper lists benchmarks. Here are a couple of comparisons between BERTje, multilingual BERT, BERT-NL and RobBERT that were done after writing the paper. Unlike some other comparisons, the fine-tuning procedures for these benchmarks are identical for each pre-trained model. You may be able to achieve higher scores for individual models by optimizing fine-tuning procedures.
More experimental results will be added to this page when they are finished. Technical details about how we fine-tuned these models will be published later, as well as downloadable fine-tuned checkpoints.
All of the tested models are *base* sized (12 layers) with cased tokenization.
Headers in the tables below link to original data sources. Scores link to the model pages that correspond to that specific fine-tuned model. These tables will be updated when more simple fine-tuned models are made available.
### Named Entity Recognition
| Model | [CoNLL-2002](https://www.clips.uantwerpen.be/conll2002/ner/) | [SoNaR-1](https://ivdnt.org/downloads/taalmaterialen/tstc-sonar-corpus) | spaCy UD LassySmall |
| ---------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| **BERTje** | [**90.24**](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-conll2002-ner) | [**84.93**](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-sonar-ner) | [86.10](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-udlassy-ner) |
| [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) | [88.61](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-conll2002-ner) | [84.19](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-sonar-ner) | [**86.77**](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-udlassy-ner) |
| [BERT-NL](http://textdata.nl) | 85.05 | 80.45 | 81.62 |
| [RobBERT](https://github.com/iPieter/RobBERT) | 84.72 | 81.98 | 79.84 |
### Part-of-speech tagging
| Model | [UDv2.5 LassySmall](https://universaldependencies.org/treebanks/nl_lassysmall/index.html) |
| ---------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------- |
| **BERTje** | **96.48** |
| [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) | 96.20 |
| [BERT-NL](http://textdata.nl) | 96.10 |
| [RobBERT](https://github.com/iPieter/RobBERT) | 95.91 |
### BibTeX entry and citation info
```bibtex
@misc{devries2019bertje,
  title = {{BERTje}: {A} {Dutch} {BERT} {Model}},
  shorttitle = {{BERTje}},
  author = {de Vries, Wietse and van Cranenburgh, Andreas and Bisazza, Arianna and Caselli, Tommaso and Noord, Gertjan van and Nissim, Malvina},
  year = {2019},
  month = dec,
  howpublished = {arXiv:1912.09582},
  url = {http://arxiv.org/abs/1912.09582},
}
```
| {"language": "nl", "tags": ["BERTje"], "thumbnail": "https://raw.githubusercontent.com/wietsedv/bertje/master/bertje.png"} | GroNLP/bert-base-dutch-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"BERTje",
"nl",
"arxiv:1912.09582",
"doi:10.57967/hf/0149",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1912.09582"
] | [
"nl"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #BERTje #nl #arxiv-1912.09582 #doi-10.57967/hf/0149 #autotrain_compatible #endpoints_compatible #has_space #region-us
| BERTje: A Dutch BERT model
==========================
Wietse de Vries •
Andreas van Cranenburgh •
Arianna Bisazza •
Tommaso Caselli •
Gertjan van Noord •
Malvina Nissim
Model description
-----------------
BERTje is a Dutch pre-trained BERT model developed at the University of Groningen.
<img src="URL height="250">
For details, check out our paper on arXiv, the code on Github and related work on Semantic Scholar.
The paper and Github page mention fine-tuned models that are available here.
How to use
----------
WARNING: The vocabulary size of BERTje has changed in 2021. If you use an older fine-tuned model and experience problems with the 'GroNLP/bert-base-dutch-cased' tokenizer, use the following tokenizer:
Benchmarks
----------
The arXiv paper lists benchmarks. Here are a couple of comparisons between BERTje, multilingual BERT, BERT-NL and RobBERT that were done after writing the paper. Unlike some other comparisons, the fine-tuning procedures for these benchmarks are identical for each pre-trained model. You may be able to achieve higher scores for individual models by optimizing fine-tuning procedures.
More experimental results will be added to this page when they are finished. Technical details about how we fine-tuned these models will be published later, as well as downloadable fine-tuned checkpoints.
All of the tested models are *base* sized (12 layers) with cased tokenization.
Headers in the tables below link to original data sources. Scores link to the model pages that correspond to that specific fine-tuned model. These tables will be updated when more simple fine-tuned models are made available.
### Named Entity Recognition
### Part-of-speech tagging
### BibTeX entry and citation info
| [
"### Named Entity Recognition",
"### Part-of-speech tagging",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #BERTje #nl #arxiv-1912.09582 #doi-10.57967/hf/0149 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Named Entity Recognition",
"### Part-of-speech tagging",
"### BibTeX entry and citation info"
] | [
71,
6,
10,
10
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #BERTje #nl #arxiv-1912.09582 #doi-10.57967/hf/0149 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### Named Entity Recognition### Part-of-speech tagging### BibTeX entry and citation info"
] |
text-generation | transformers |
# GPT-2 recycled for Dutch (medium, adapted lexical embeddings)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the medium OpenAI GPT-2 ([`gpt2-medium`](https://huggingface.co/gpt2-medium)) model.
The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for a Dutch vocabulary.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-medium-dutch-embeddings")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-medium-dutch-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-medium-dutch-embeddings") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-medium-dutch-embeddings") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "nl", "tags": ["adaption", "recycled", "gpt2-medium"], "pipeline_tag": "text-generation"} | GroNLP/gpt2-medium-dutch-embeddings | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"gpt2",
"text-generation",
"adaption",
"recycled",
"gpt2-medium",
"nl",
"arxiv:2012.05628",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2012.05628"
] | [
"nl"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-medium #nl #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GPT-2 recycled for Dutch (medium, adapted lexical embeddings)
Wietse de Vries •
Malvina Nissim
## Model description
This model is based on the medium OpenAI GPT-2 ('gpt2-medium') model.
The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for a Dutch vocabulary.
For details, check out our paper on arXiv and the code on Github.
## Related models
### Dutch
- 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.
- 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)
- 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.
### Italian
- 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.
- 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)
- 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.
## How to use
## BibTeX entry
| [
"# GPT-2 recycled for Dutch (medium, adapted lexical embeddings)\nWietse de Vries •\nMalvina Nissim",
"## Model description\n\nThis model is based on the medium OpenAI GPT-2 ('gpt2-medium') model.\n\nThe Transformer layer weights in this model are identical to the original English, model but the lexical layer has been retrained for a Dutch vocabulary.\n\nFor details, check out our paper on arXiv and the code on Github.",
"## Related models",
"### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.",
"### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.",
"## How to use",
"## BibTeX entry"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-medium #nl #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GPT-2 recycled for Dutch (medium, adapted lexical embeddings)\nWietse de Vries •\nMalvina Nissim",
"## Model description\n\nThis model is based on the medium OpenAI GPT-2 ('gpt2-medium') model.\n\nThe Transformer layer weights in this model are identical to the original English, model but the lexical layer has been retrained for a Dutch vocabulary.\n\nFor details, check out our paper on arXiv and the code on Github.",
"## Related models",
"### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.",
"### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.",
"## How to use",
"## BibTeX entry"
] | [
68,
31,
78,
4,
103,
103,
5,
6
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-medium #nl #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# GPT-2 recycled for Dutch (medium, adapted lexical embeddings)\nWietse de Vries •\nMalvina Nissim## Model description\n\nThis model is based on the medium OpenAI GPT-2 ('gpt2-medium') model.\n\nThe Transformer layer weights in this model are identical to the original English, model but the lexical layer has been retrained for a Dutch vocabulary.\n\nFor details, check out our paper on arXiv and the code on Github.## Related models### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.## How to use## BibTeX entry"
] |
text-generation | transformers |
# GPT-2 recycled for Italian (medium, adapted lexical embeddings)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the medium OpenAI GPT-2 ([`gpt2-medium`](https://huggingface.co/gpt2-medium)) model.
The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for an Italian vocabulary.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-medium-italian-embeddings")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-medium-italian-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-medium-italian-embeddings") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-medium-italian-embeddings") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "it", "tags": ["adaption", "recycled", "gpt2-medium"], "pipeline_tag": "text-generation"} | GroNLP/gpt2-medium-italian-embeddings | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"gpt2",
"text-generation",
"adaption",
"recycled",
"gpt2-medium",
"it",
"arxiv:2012.05628",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2012.05628"
] | [
"it"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-medium #it #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GPT-2 recycled for Italian (medium, adapted lexical embeddings)
Wietse de Vries •
Malvina Nissim
## Model description
This model is based on the medium OpenAI GPT-2 ('gpt2-medium') model.
The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for an Italian vocabulary.
For details, check out our paper on arXiv and the code on Github.
## Related models
### Dutch
- 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.
- 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)
- 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.
### Italian
- 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.
- 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)
- 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.
## How to use
## BibTeX entry
| [
"# GPT-2 recycled for Italian (medium, adapted lexical embeddings)\nWietse de Vries •\nMalvina Nissim",
"## Model description\n\nThis model is based on the medium OpenAI GPT-2 ('gpt2-medium') model.\n\nThe Transformer layer weights in this model are identical to the original English, model but the lexical layer has been retrained for an Italian vocabulary.\n\nFor details, check out our paper on arXiv and the code on Github.",
"## Related models",
"### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.",
"### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.",
"## How to use",
"## BibTeX entry"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-medium #it #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GPT-2 recycled for Italian (medium, adapted lexical embeddings)\nWietse de Vries •\nMalvina Nissim",
"## Model description\n\nThis model is based on the medium OpenAI GPT-2 ('gpt2-medium') model.\n\nThe Transformer layer weights in this model are identical to the original English, model but the lexical layer has been retrained for an Italian vocabulary.\n\nFor details, check out our paper on arXiv and the code on Github.",
"## Related models",
"### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.",
"### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.",
"## How to use",
"## BibTeX entry"
] | [
68,
31,
78,
4,
103,
103,
5,
6
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-medium #it #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# GPT-2 recycled for Italian (medium, adapted lexical embeddings)\nWietse de Vries •\nMalvina Nissim## Model description\n\nThis model is based on the medium OpenAI GPT-2 ('gpt2-medium') model.\n\nThe Transformer layer weights in this model are identical to the original English, model but the lexical layer has been retrained for an Italian vocabulary.\n\nFor details, check out our paper on arXiv and the code on Github.## Related models### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.## How to use## BibTeX entry"
] |
text-generation | transformers |
# GPT-2 recycled for Dutch (small, adapted lexical embeddings)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for a Dutch vocabulary.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-dutch-embeddings")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-dutch-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-dutch-embeddings") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-dutch-embeddings") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "nl", "tags": ["adaption", "recycled", "gpt2-small"], "pipeline_tag": "text-generation"} | GroNLP/gpt2-small-dutch-embeddings | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"gpt2",
"text-generation",
"adaption",
"recycled",
"gpt2-small",
"nl",
"arxiv:2012.05628",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2012.05628"
] | [
"nl"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-small #nl #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GPT-2 recycled for Dutch (small, adapted lexical embeddings)
Wietse de Vries •
Malvina Nissim
## Model description
This model is based on the small OpenAI GPT-2 ('gpt2') model.
The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for a Dutch vocabulary.
For details, check out our paper on arXiv and the code on Github.
## Related models
### Dutch
- 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.
- 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)
- 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.
### Italian
- 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.
- 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)
- 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.
## How to use
## BibTeX entry
| [
"# GPT-2 recycled for Dutch (small, adapted lexical embeddings)\nWietse de Vries •\nMalvina Nissim",
"## Model description\n\nThis model is based on the small OpenAI GPT-2 ('gpt2') model.\n\nThe Transformer layer weights in this model are identical to the original English, model but the lexical layer has been retrained for a Dutch vocabulary.\n\nFor details, check out our paper on arXiv and the code on Github.",
"## Related models",
"### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.",
"### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.",
"## How to use",
"## BibTeX entry"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-small #nl #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GPT-2 recycled for Dutch (small, adapted lexical embeddings)\nWietse de Vries •\nMalvina Nissim",
"## Model description\n\nThis model is based on the small OpenAI GPT-2 ('gpt2') model.\n\nThe Transformer layer weights in this model are identical to the original English, model but the lexical layer has been retrained for a Dutch vocabulary.\n\nFor details, check out our paper on arXiv and the code on Github.",
"## Related models",
"### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.",
"### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.",
"## How to use",
"## BibTeX entry"
] | [
68,
31,
76,
4,
103,
103,
5,
6
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-small #nl #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# GPT-2 recycled for Dutch (small, adapted lexical embeddings)\nWietse de Vries •\nMalvina Nissim## Model description\n\nThis model is based on the small OpenAI GPT-2 ('gpt2') model.\n\nThe Transformer layer weights in this model are identical to the original English, model but the lexical layer has been retrained for a Dutch vocabulary.\n\nFor details, check out our paper on arXiv and the code on Github.## Related models### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.## How to use## BibTeX entry"
] |
text-generation | transformers |
# GPT-2 recycled for Dutch (small)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-dutch")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-dutch")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-dutch") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-dutch") # Tensorflow
```
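The snippets above only build the pipeline and load the weights; the sketch below shows one way to actually generate text. The Dutch prompt and sampling settings are illustrative choices of ours, not recommendations from the paper.

```python
from transformers import pipeline

# Illustrative generation call; the prompt and sampling settings are our own choices.
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-dutch")
output = pipe("Gisteren was ik in Groningen en", max_length=50, do_sample=True, top_k=50)
print(output[0]["generated_text"])
```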
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "nl", "tags": ["adaption", "recycled", "gpt2-small"], "pipeline_tag": "text-generation"} | GroNLP/gpt2-small-dutch | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"gpt2",
"text-generation",
"adaption",
"recycled",
"gpt2-small",
"nl",
"arxiv:2012.05628",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2012.05628"
] | [
"nl"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-small #nl #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# GPT-2 recycled for Dutch (small)
Wietse de Vries •
Malvina Nissim
## Model description
This model is based on the small OpenAI GPT-2 ('gpt2') model.
For details, check out our paper on arXiv and the code on Github.
## Related models
### Dutch
- 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.
- 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)
- 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.
### Italian
- 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.
- 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)
- 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.
## How to use
## BibTeX entry
| [
"# GPT-2 recycled for Dutch (small)\nWietse de Vries •\nMalvina Nissim",
"## Model description\n\nThis model is based on the small OpenAI GPT-2 ('gpt2') model.\n\nFor details, check out our paper on arXiv and the code on Github.",
"## Related models",
"### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.",
"### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.",
"## How to use",
"## BibTeX entry"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-small #nl #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# GPT-2 recycled for Dutch (small)\nWietse de Vries •\nMalvina Nissim",
"## Model description\n\nThis model is based on the small OpenAI GPT-2 ('gpt2') model.\n\nFor details, check out our paper on arXiv and the code on Github.",
"## Related models",
"### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.",
"### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.",
"## How to use",
"## BibTeX entry"
] | [
72,
23,
45,
4,
103,
103,
5,
6
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-small #nl #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# GPT-2 recycled for Dutch (small)\nWietse de Vries •\nMalvina Nissim## Model description\n\nThis model is based on the small OpenAI GPT-2 ('gpt2') model.\n\nFor details, check out our paper on arXiv and the code on Github.## Related models### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.## How to use## BibTeX entry"
] |
text-generation | transformers |
# GPT-2 recycled for Italian (small, adapted lexical embeddings)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for an Italian vocabulary.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-italian-embeddings")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-italian-embeddings") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-italian-embeddings") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "it", "tags": ["adaption", "recycled", "gpt2-small"], "pipeline_tag": "text-generation"} | GroNLP/gpt2-small-italian-embeddings | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"gpt2",
"text-generation",
"adaption",
"recycled",
"gpt2-small",
"it",
"arxiv:2012.05628",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2012.05628"
] | [
"it"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-small #it #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# GPT-2 recycled for Italian (small, adapted lexical embeddings)
Wietse de Vries •
Malvina Nissim
## Model description
This model is based on the small OpenAI GPT-2 ('gpt2') model.
The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for an Italian vocabulary.
For details, check out our paper on arXiv and the code on Github.
## Related models
### Dutch
- 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.
- 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)
- 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.
### Italian
- 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.
- 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)
- 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.
## How to use
## BibTeX entry
| [
"# GPT-2 recycled for Italian (small, adapted lexical embeddings)\nWietse de Vries •\nMalvina Nissim",
"## Model description\n\nThis model is based on the small OpenAI GPT-2 ('gpt2') model.\n\nThe Transformer layer weights in this model are identical to the original English, model but the lexical layer has been retrained for an Italian vocabulary.\n\nFor details, check out our paper on arXiv and the code on Github.",
"## Related models",
"### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.",
"### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.",
"## How to use",
"## BibTeX entry"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-small #it #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# GPT-2 recycled for Italian (small, adapted lexical embeddings)\nWietse de Vries •\nMalvina Nissim",
"## Model description\n\nThis model is based on the small OpenAI GPT-2 ('gpt2') model.\n\nThe Transformer layer weights in this model are identical to the original English, model but the lexical layer has been retrained for an Italian vocabulary.\n\nFor details, check out our paper on arXiv and the code on Github.",
"## Related models",
"### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.",
"### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.",
"## How to use",
"## BibTeX entry"
] | [
72,
31,
76,
4,
103,
103,
5,
6
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-small #it #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# GPT-2 recycled for Italian (small, adapted lexical embeddings)\nWietse de Vries •\nMalvina Nissim## Model description\n\nThis model is based on the small OpenAI GPT-2 ('gpt2') model.\n\nThe Transformer layer weights in this model are identical to the original English, model but the lexical layer has been retrained for an Italian vocabulary.\n\nFor details, check out our paper on arXiv and the code on Github.## Related models### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.## How to use## BibTeX entry"
] |
text-generation | transformers |
# GPT-2 recycled for Italian (small)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-italian")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-italian") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-italian") # Tensorflow
```
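As with the snippets above, the pipeline object can be called directly to generate text; the Italian prompt and sampling settings below are illustrative choices of ours, not recommendations from the paper.

```python
from transformers import pipeline

# Illustrative generation call; the prompt and sampling settings are our own choices.
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-italian")
output = pipe("Ieri sono andato a Roma e", max_length=50, do_sample=True, top_k=50)
print(output[0]["generated_text"])
```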
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "it", "tags": ["adaption", "recycled", "gpt2-small"], "pipeline_tag": "text-generation"} | GroNLP/gpt2-small-italian | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"gpt2",
"text-generation",
"adaption",
"recycled",
"gpt2-small",
"it",
"arxiv:2012.05628",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2012.05628"
] | [
"it"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-small #it #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GPT-2 recycled for Italian (small)
Wietse de Vries •
Malvina Nissim
## Model description
This model is based on the small OpenAI GPT-2 ('gpt2') model.
For details, check out our paper on arXiv and the code on Github.
## Related models
### Dutch
- 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.
- 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)
- 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.
### Italian
- 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.
- 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)
- 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.
## How to use
## BibTeX entry
| [
"# GPT-2 recycled for Italian (small)\nWietse de Vries •\nMalvina Nissim",
"## Model description\n\nThis model is based on the small OpenAI GPT-2 ('gpt2') model.\n\nFor details, check out our paper on arXiv and the code on Github.",
"## Related models",
"### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.",
"### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.",
"## How to use",
"## BibTeX entry"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-small #it #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GPT-2 recycled for Italian (small)\nWietse de Vries •\nMalvina Nissim",
"## Model description\n\nThis model is based on the small OpenAI GPT-2 ('gpt2') model.\n\nFor details, check out our paper on arXiv and the code on Github.",
"## Related models",
"### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.",
"### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.",
"## How to use",
"## BibTeX entry"
] | [
68,
23,
45,
4,
103,
103,
5,
6
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #adaption #recycled #gpt2-small #it #arxiv-2012.05628 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# GPT-2 recycled for Italian (small)\nWietse de Vries •\nMalvina Nissim## Model description\n\nThis model is based on the small OpenAI GPT-2 ('gpt2') model.\n\nFor details, check out our paper on arXiv and the code on Github.## Related models### Dutch\n - 'gpt2-small-dutch-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-dutch': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-dutch-embeddings': Medium model size with only retrained lexical embeddings.### Italian\n - 'gpt2-small-italian-embeddings': Small model size with only retrained lexical embeddings.\n - 'gpt2-small-italian': Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (Recommended)\n - 'gpt2-medium-italian-embeddings': Medium model size with only retrained lexical embeddings.## How to use## BibTeX entry"
] |
fill-mask | transformers |
# HateBERT
[Tommaso Caselli](https://www.semanticscholar.org/author/Tommaso-Caselli/1864635) •
[Valerio Basile](https://www.semanticscholar.org/author/Valerio-Basile/3101511) •
[Jelena Mitrovic](https://www.semanticscholar.org/author/Jelena-Mitrovic/145157863) •
[Michael Granitzer](https://www.semanticscholar.org/author/M.-Granitzer/2389675)
## Model description
HateBERT is an English pre-trained BERT model obtained by further training the English BERT base uncased model with more than 1 million posts from banned Reddit communities. The model has been developed as a collaboration between the University of Groningen, the University of Turin, and the University of Passau.
For details, check out the paper presented at [WOAH 2021](https://aclanthology.org/2021.woah-1.3/). The code and the fine-tuned models are available on [OSF](https://osf.io/tbd58/?view_onlycb79b3228d4248ddb875eb1803525ad8).
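Since HateBERT is a further pre-trained BERT base uncased checkpoint, the released model can be queried directly as a masked language model; the snippet below is a minimal, generic sketch (the example sentence is our own), while the task-specific classifiers are the separate fine-tuned models mentioned above.

```python
from transformers import pipeline

# Minimal sketch: query the base HateBERT checkpoint as a fill-mask model.
# The example sentence is our own, purely for illustration.
fill_mask = pipeline("fill-mask", model="GroNLP/hateBERT")
print(fill_mask("That comment was completely [MASK]."))
```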
### BibTeX entry and citation info
```bibtex
@inproceedings{caselli-etal-2021-hatebert,
  title = "{H}ate{BERT}: Retraining {BERT} for Abusive Language Detection in {E}nglish",
  author = "Caselli, Tommaso and
    Basile, Valerio and
    Mitrovi{\'c}, Jelena and
    Granitzer, Michael",
  booktitle = "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
  month = aug,
  year = "2021",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2021.woah-1.3",
  doi = "10.18653/v1/2021.woah-1.3",
  pages = "17--25",
  abstract = "We introduce HateBERT, a re-trained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of Reddit comments in English from communities banned for being offensive, abusive, or hateful that we have curated and made available to the public. We present the results of a detailed comparison between a general pre-trained language model and the retrained version on three English datasets for offensive, abusive language and hate speech detection tasks. In all datasets, HateBERT outperforms the corresponding general BERT model. We also discuss a battery of experiments comparing the portability of the fine-tuned models across the datasets, suggesting that portability is affected by compatibility of the annotated phenomena.",
}
``` | {"language": "en", "tags": ["HateBERT", "text classification", "abusive language", "hate speech", "offensive language"]} | GroNLP/hateBERT | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"HateBERT",
"text classification",
"abusive language",
"hate speech",
"offensive language",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #bert #fill-mask #HateBERT #text classification #abusive language #hate speech #offensive language #en #autotrain_compatible #endpoints_compatible #has_space #region-us
|
#
Tommaso Caselli •
Valerio Basile •
Jelena Mitrovic •
Michael Granitzer
## Model description
HateBERT is an English pre-trained BERT model obtained by further training the English BERT base uncased model with more than 1 million posts from banned communities on Reddit. The model has been developed as a collaboration between the University of Groningen, the University of Turin, and the University of Passau.
For details, check out the paper presented at WOAH 2021. The code and the fine-tuned models are available on OSF.
### BibTeX entry and citation info
| [
"# \nTommaso Caselli •\nValerio Basile •\nJelena Mitrovic •\nMichael Granizter",
"## Model description\n\nHateBERT is an English pre-trained BERT model obtained by further training the English BERT base uncased model with more than 1 million posts from banned communites from Reddit. The model has been developed as a collaboration between the University of Groningen, the university of Turin, and the University of Passau.\n\nFor details, check out the paper presented at WOAH 2021. The code and the fine-tuned models are available on OSF.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #HateBERT #text classification #abusive language #hate speech #offensive language #en #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# \nTommaso Caselli •\nValerio Basile •\nJelena Mitrovic •\nMichael Granizter",
"## Model description\n\nHateBERT is an English pre-trained BERT model obtained by further training the English BERT base uncased model with more than 1 million posts from banned communites from Reddit. The model has been developed as a collaboration between the University of Groningen, the university of Turin, and the University of Passau.\n\nFor details, check out the paper presented at WOAH 2021. The code and the fine-tuned models are available on OSF.",
"### BibTeX entry and citation info"
] | [
53,
22,
93,
10
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #HateBERT #text classification #abusive language #hate speech #offensive language #en #autotrain_compatible #endpoints_compatible #has_space #region-us \n# \nTommaso Caselli •\nValerio Basile •\nJelena Mitrovic •\nMichael Granizter## Model description\n\nHateBERT is an English pre-trained BERT model obtained by further training the English BERT base uncased model with more than 1 million posts from banned communites from Reddit. The model has been developed as a collaboration between the University of Groningen, the university of Turin, and the University of Passau.\n\nFor details, check out the paper presented at WOAH 2021. The code and the fine-tuned models are available on OSF.### BibTeX entry and citation info"
] |
null | null | ### The MelGAN vocoder for StyleSpeech
#### About StyleSpeech
* StyleSpeech or Meta-StyleSpeech is a model for Multi-Speaker Adaptive Text-to-Speech Generation
* The StyleSpeech model can be trained with the official implementation (https://github.com/KevinMIN95/StyleSpeech).
#### About MelGAN vocoder
* This MelGAN vocoder is used to transform the mel-spectrogram back to the waveform.
* StyleSpeech uses a 16 kHz sampling rate, and there is no 16 kHz multi-speaker vocoder available.
* Thus I trained this vocoder from scratch on the Libri-TTS train-100-hour dataset. The training pipeline is the same as the official MelGAN (https://github.com/descriptinc/melgan-neurips).
* The synthesized audio is close to the official demo in quality.
#### Usage
* Please follow the official MelGAN repository (https://github.com/descriptinc/melgan-neurips) to load the pre-trained checkpoint and convert your mel-spectrograms back to waveforms; a rough sketch is shown below.
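A hedged sketch of the mel-to-waveform call, assuming the `MelVocoder` interface and module layout of the official melgan-neurips code; the checkpoint directory and the mel-spectrogram are placeholders, so follow the repository README for the exact loading steps.
```python
import torch
from mel2wav.interface import MelVocoder  # assumed module path in the official repo

# Assumed usage of the official interface; the checkpoint directory is a placeholder.
vocoder = MelVocoder("checkpoints/libritts_16khz")

mel = torch.randn(1, 80, 200)  # placeholder mel-spectrogram: (batch, n_mels, frames)
with torch.no_grad():
    waveform = vocoder.inverse(mel)  # audio samples at 16 kHz
```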
#### Training Details
* GPU: RTX 2080Ti
* Training epoch: 3000
| {} | Guan-Ting/StyleSpeech-MelGAN-vocoder-16kHz | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| ### The MelGAN vocoder for StyleSpeech
#### About StyleSpeech
* StyleSpeech or Meta-StyleSpeech is a model for Multi-Speaker Adaptive Text-to-Speech Generation
* The StyleSpeech model can be trained by official implementation (URL
#### About MelGAN vocoder
* This MelGAN vocoder is used to transform the mel-spectrogram back to the waveform.
* StyleSpeech is based on 16k Hz sampling rate, and there is no available 16k Hz multi-speaker vocoder.
* Thus I train this vocoder from scratch using Libri-TTS train-100 hour dataset. The training pipeline is the same as the official MelGAN (URL
* The synthesized sounds are close to the official demo with good quality.
#### Usage
* Please follow the official MelGAN (URL to load pre-trained checkpoint and convert your mel-spectrogram back to the waveform.
#### Training Details
* GPU: RTX 2080Ti
* Training epoch: 3000
| [
"### The MelGAN vocoder for StyleSpeech",
"#### About StyleSpeech\n* StyleSpeech or Meta-StyleSpeech is a model for Multi-Speaker Adaptive Text-to-Speech Generation\n* The StyleSpeech model can be trained by official implementation (URL",
"#### About MelGAN vocoder\n* This MelGAN vocoder is used to transform the mel-spectrogram back to the waveform. \n* StyleSpeech is based on 16k Hz sampling rate, and there is no available 16k Hz multi-speaker vocoder.\n* Thus I train this vocoder from scratch using Libri-TTS train-100 hour dataset. The training pipeline is the same as the official MelGAN (URL\n* The synthesized sounds are close to the official demo with good quality.",
"#### Usage\n* Please follow the official MelGAN (URL to load pre-trained checkpoint and convert your mel-spectrogram back to the waveform.",
"#### Training Details \n* GPU: RTX 2080Ti\n* Training epoch: 3000"
] | [
"TAGS\n#region-us \n",
"### The MelGAN vocoder for StyleSpeech",
"#### About StyleSpeech\n* StyleSpeech or Meta-StyleSpeech is a model for Multi-Speaker Adaptive Text-to-Speech Generation\n* The StyleSpeech model can be trained by official implementation (URL",
"#### About MelGAN vocoder\n* This MelGAN vocoder is used to transform the mel-spectrogram back to the waveform. \n* StyleSpeech is based on 16k Hz sampling rate, and there is no available 16k Hz multi-speaker vocoder.\n* Thus I train this vocoder from scratch using Libri-TTS train-100 hour dataset. The training pipeline is the same as the official MelGAN (URL\n* The synthesized sounds are close to the official demo with good quality.",
"#### Usage\n* Please follow the official MelGAN (URL to load pre-trained checkpoint and convert your mel-spectrogram back to the waveform.",
"#### Training Details \n* GPU: RTX 2080Ti\n* Training epoch: 3000"
] | [
5,
13,
47,
112,
35,
20
] | [
"TAGS\n#region-us \n### The MelGAN vocoder for StyleSpeech#### About StyleSpeech\n* StyleSpeech or Meta-StyleSpeech is a model for Multi-Speaker Adaptive Text-to-Speech Generation\n* The StyleSpeech model can be trained by official implementation (URL#### About MelGAN vocoder\n* This MelGAN vocoder is used to transform the mel-spectrogram back to the waveform. \n* StyleSpeech is based on 16k Hz sampling rate, and there is no available 16k Hz multi-speaker vocoder.\n* Thus I train this vocoder from scratch using Libri-TTS train-100 hour dataset. The training pipeline is the same as the official MelGAN (URL\n* The synthesized sounds are close to the official demo with good quality.#### Usage\n* Please follow the official MelGAN (URL to load pre-trained checkpoint and convert your mel-spectrogram back to the waveform.#### Training Details \n* GPU: RTX 2080Ti\n* Training epoch: 3000"
] |
text-generation | transformers |
# Rick Sanchez DialoGPT Model | {"tags": ["conversational"]} | Guard-SK/DialoGPT-medium-ricksanchez | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Sanchez DialoGPT Model | [
"# Rick Sanchez DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick Sanchez DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Rick Sanchez DialoGPT Model"
] |
text-generation | transformers |
#Rick Sanchez DialoGPT Model | {"tags": ["conversational"]} | Guard-SK/DialoGPT-small-ricksanchez | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Rick Sanchez DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Game of Thrones DialoGPT Model | {"tags": ["conversational"]} | GunjanPantha/DialoGPT-small-gameofthrones | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Game of Thrones DialoGPT Model | [
"# Game of Thrones DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Game of Thrones DialoGPT Model"
] | [
39,
9
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Game of Thrones DialoGPT Model"
] |
text-to-speech | espnet |
## ESPnet2 TTS model
### `GunnarThor/talromur_f_tacotron2`
This model was trained by Gunnar Thor using talromur recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 81522029063e42ce807d9d145b64d3f9aca45987
pip install -e .
cd egs2/talromur/tts1
./run.sh --skip_data_prep false --skip_train true --download_model GunnarThor/talromur_f_tacotron2
```
## TTS config
<details><summary>expand</summary>
```
config: ./conf/tuning/train_tacotron2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp_f/tts_train_tacotron2_raw_phn_none
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 55005
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 5120000
valid_batch_bins: null
train_shape_file:
- exp_f/tts_stats_raw_phn_none/train/text_shape.phn
- exp_f/tts_stats_raw_phn_none/train/speech_shape
valid_shape_file:
- exp_f/tts_stats_raw_phn_none/valid/text_shape.phn
- exp_f/tts_stats_raw_phn_none/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_f_phn/text
- text
- text
- - dump/raw/train_f_phn/wav.scp
- speech
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dev_f_phn/text
- text
- text
- - dump/raw/dev_f_phn/wav.scp
- speech
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-06
weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ','
- .
- r
- t
- n
- a0
- s
- I0
- D
- l
- Y0
- m
- v
- h
- k
- E1
- a:1
- E:1
- f
- G
- j
- a1
- T
- p
- c
- au:1
- E0
- i:1
- O:1
- I:1
- I1
- r_0
- t_h
- k_h
- Y1
- ei1
- i0
- ei:1
- ou:1
- u:1
- O1
- N
- l_0
- '91'
- ai0
- au1
- ou0
- ai:1
- n_0
- ei0
- O0
- ou1
- i1
- '9:1'
- ai1
- '90'
- au0
- x
- c_h
- 9i:1
- C
- p_h
- u0
- Y:1
- J
- 9i1
- u1
- 9i0
- N_0
- m_0
- J_0
- Yi0
- Oi1
- Yi1
- Oi0
- au:0
- '9:0'
- E:0
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 22050
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp_f/tts_stats_raw_phn_none/train/feats_stats.npz
tts: tacotron2
tts_conf:
embed_dim: 512
elayers: 1
eunits: 512
econv_layers: 3
econv_chans: 512
econv_filts: 5
atype: location
adim: 512
aconv_chans: 32
aconv_filts: 15
cumulate_att_w: true
dlayers: 2
dunits: 1024
prenet_layers: 2
prenet_units: 256
postnet_layers: 5
postnet_chans: 512
postnet_filts: 5
output_activation: null
use_batch_norm: true
use_concate: true
use_residual: false
dropout_rate: 0.5
zoneout_rate: 0.1
reduction_factor: 1
spk_embed_dim: null
use_masking: true
bce_pos_weight: 5.0
use_guided_attn_loss: true
guided_attn_loss_sigma: 0.4
guided_attn_loss_lambda: 1.0
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: 0.10.5a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["talromur"]} | GunnarThor/talromur_f_tacotron2 | null | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:talromur",
"arxiv:1804.00015",
"license:cc-by-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1804.00015"
] | [
"en"
] | TAGS
#espnet #audio #text-to-speech #en #dataset-talromur #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us
|
## ESPnet2 TTS model
### 'GunnarThor/talromur_f_tacotron2'
This model was trained by Gunnar Thor using talromur recipe in espnet.
### Demo: How to use in ESPnet2
## TTS config
<details><summary>expand</summary>
</details>
### Citing ESPnet
or arXiv:
| [
"## ESPnet2 TTS model",
"### 'GunnarThor/talromur_f_tacotron2'\n\nThis model was trained by Gunnar Thor using talromur recipe in espnet.",
"### Demo: How to use in ESPnet2",
"## TTS config\n\n<details><summary>expand</summary>\n\n\n\n</details>",
"### Citing ESPnet\n\n\n\nor arXiv:"
] | [
"TAGS\n#espnet #audio #text-to-speech #en #dataset-talromur #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us \n",
"## ESPnet2 TTS model",
"### 'GunnarThor/talromur_f_tacotron2'\n\nThis model was trained by Gunnar Thor using talromur recipe in espnet.",
"### Demo: How to use in ESPnet2",
"## TTS config\n\n<details><summary>expand</summary>\n\n\n\n</details>",
"### Citing ESPnet\n\n\n\nor arXiv:"
] | [
48,
8,
34,
12,
22,
11
] | [
"TAGS\n#espnet #audio #text-to-speech #en #dataset-talromur #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us \n## ESPnet2 TTS model### 'GunnarThor/talromur_f_tacotron2'\n\nThis model was trained by Gunnar Thor using talromur recipe in espnet.### Demo: How to use in ESPnet2## TTS config\n\n<details><summary>expand</summary>\n\n\n\n</details>### Citing ESPnet\n\n\n\nor arXiv:"
] |
null | null | Modified from: https://huggingface.co/pkufool/icefall_asr_aishell_conformer_ctc
1. Removed the parts not used by CTC greedy search decoding; this checkpoint is intended for tutorial use only.
| {} | GuoLiyong/cn_conformer_encoder_aishell | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| Modified from: URL
1. Removed the parts not used by CTC greedy search decoding; this checkpoint is intended for tutorial use only.
| [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
null | null | The original link of these models is:
https://zenodo.org/record/4604066#.YKtNrqgzZPY
which is accessible by espnet utils
They are ported to this repo for users who don't have espnet dependencies.
| {} | GuoLiyong/snowfall_model_zoo | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| The original link of these models is:
URL
which is accessible by espnet utils
They are ported to this repo for users who don't have espnet dependencies.
| [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
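The hyperparameters above correspond roughly to the following `TrainingArguments` — a hedged reconstruction in which only the listed values come from this card; the output directory and the rest of the training setup are assumptions.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters; only the values
# below are taken from the card, everything else (output_dir, dataset, Trainer
# setup) is assumed.
training_args = TrainingArguments(
    output_dir="distilbert-base-cased-finetuned",
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # mixed_precision_training: Native AMP
)
```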
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3101 | 1.0 | 974 | 2.0502 |
| 2.0831 | 2.0 | 1948 | 1.9627 |
| 2.0198 | 3.0 | 2922 | 1.8998 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-cased-finetuned", "results": []}]} | GusNicho/distilbert-base-cased-finetuned | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-cased-finetuned
===============================
This model is a fine-tuned version of distilbert-base-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9161
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.9.1
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
47,
112,
5,
40
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4057
- eval_runtime: 3.7087
- eval_samples_per_second: 167.712
- eval_steps_per_second: 2.696
- epoch: 2.11
- step: 2053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "roberta-base-finetuned", "results": []}]} | GusNicho/roberta-base-finetuned | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-base-finetuned
This model is a fine-tuned version of roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4057
- eval_runtime: 3.7087
- eval_samples_per_second: 167.712
- eval_steps_per_second: 2.696
- epoch: 2.11
- step: 2053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| [
"# roberta-base-finetuned\n\nThis model is a fine-tuned version of roberta-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 1.4057\n- eval_runtime: 3.7087\n- eval_samples_per_second: 167.712\n- eval_steps_per_second: 2.696\n- epoch: 2.11\n- step: 2053",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.9.1\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-base-finetuned\n\nThis model is a fine-tuned version of roberta-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 1.4057\n- eval_runtime: 3.7087\n- eval_samples_per_second: 167.712\n- eval_steps_per_second: 2.696\n- epoch: 2.11\n- step: 2053",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.9.1\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] | [
41,
98,
7,
9,
9,
4,
102,
40
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# roberta-base-finetuned\n\nThis model is a fine-tuned version of roberta-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 1.4057\n- eval_runtime: 3.7087\n- eval_samples_per_second: 167.712\n- eval_steps_per_second: 2.696\n- epoch: 2.11\n- step: 2053## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.9.1\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
text-classification | transformers |
# DKbert-hatespeech-classification
Use this model to detect hate speech in Danish. For details, a guide, and a command-line tool, see the [DK hate GitHub](https://github.com/Guscode/DKbert-hatespeech-detection).
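A minimal usage sketch (the input sentence is a placeholder, and the returned label names may not match the human-readable class names exactly):
```python
from transformers import pipeline

# Minimal sketch: score a Danish sentence with the hate-speech classifier.
classifier = pipeline("text-classification", model="Guscode/DKbert-hatespeech-detection")
print(classifier("Jeg elsker min familie."))
```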
## Training data
Training data is from OffensEval2020 which can be found [here]( https://figshare.com/articles/dataset/Danish_Hate_Speech_Abusive_Language_data/12220805)
## Performance
The model achieves a macro F1-score of 0.78
Precision hateful: 0.77
Recall hateful: 0.49
See more on [DK hate github](https://github.com/Guscode/DKbert-hatespeech-detection)
## Training procedure
- [BOTXO Nordic Bert](https://huggingface.co/DJSammy/bert-base-danish-uncased_BotXO,ai)
- Learning rate: 1e-5,
- Batch size: 16
- Max sequence length: 128
## Project information
This model was made in collaboration between [Johan Horsmans](https://github.com/JohanHorsmans) and [Gustav Aarup Lauridsen](https://github.com/Guscode) for their Cultural Data Science Exam.
| {"language": ["da"], "license": "mit", "tags": ["Hatespeech", "Danish", "BERT"], "datasets": ["DKHate - OffensEval2020"], "Classes": ["Hateful", "Not Hateful"]} | Guscode/DKbert-hatespeech-detection | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"Hatespeech",
"Danish",
"BERT",
"da",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"da"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #Hatespeech #Danish #BERT #da #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# DKbert-hatespeech-classification
Use this model to detect hatespeech in Danish. For details, guide and command line tool see DK hate github
## Training data
Training data is from OffensEval2020 which can be found here
## Performance
The model achieves a macro F1-score of 0.78
Precision hateful: 0.77
Recall hateful: 0.49
See more on DK hate github
## Training procedure
- BOTXO Nordic Bert
- Learning rate: 1e-5,
- Batch size: 16
- Max sequence length: 128
## Project information
This model was made in collaboration between Johan Horsmans and Gustav Aarup Lauridsen for their Cultural Data Science Exam.
| [
"# DKbert-hatespeech-classification\n\nUse this model to detect hatespeech in Danish. For details, guide and command line tool see DK hate github",
"## Training data\n\nTraining data is from OffensEval2020 which can be found here",
"## Performance\n\nThe model achieves a macro F1-score of 0.78 \n\nPrecision hateful: 0.77\n\nRecall hateful: 0.49\n\nSee more on DK hate github",
"## Training procedure\n\n- BOTXO Nordic Bert\n- Learning rate: 1e-5,\n- Batch size: 16\n- Max sequence length: 128",
"## Project information\n\nThis model was made in collaboration between Johan Horsmans and Gustav Aarup Lauridsen for their Cultural Data Science Exam."
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #Hatespeech #Danish #BERT #da #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# DKbert-hatespeech-classification\n\nUse this model to detect hatespeech in Danish. For details, guide and command line tool see DK hate github",
"## Training data\n\nTraining data is from OffensEval2020 which can be found here",
"## Performance\n\nThe model achieves a macro F1-score of 0.78 \n\nPrecision hateful: 0.77\n\nRecall hateful: 0.49\n\nSee more on DK hate github",
"## Training procedure\n\n- BOTXO Nordic Bert\n- Learning rate: 1e-5,\n- Batch size: 16\n- Max sequence length: 128",
"## Project information\n\nThis model was made in collaboration between Johan Horsmans and Gustav Aarup Lauridsen for their Cultural Data Science Exam."
] | [
45,
36,
17,
39,
30,
29
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #Hatespeech #Danish #BERT #da #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# DKbert-hatespeech-classification\n\nUse this model to detect hatespeech in Danish. For details, guide and command line tool see DK hate github## Training data\n\nTraining data is from OffensEval2020 which can be found here## Performance\n\nThe model achieves a macro F1-score of 0.78 \n\nPrecision hateful: 0.77\n\nRecall hateful: 0.49\n\nSee more on DK hate github## Training procedure\n\n- BOTXO Nordic Bert\n- Learning rate: 1e-5,\n- Batch size: 16\n- Max sequence length: 128## Project information\n\nThis model was made in collaboration between Johan Horsmans and Gustav Aarup Lauridsen for their Cultural Data Science Exam."
] |
text-generation | transformers |
#Batman Botty gpt model | {"tags": ["conversational"]} | Guy0/DialoGPT-small-Batmanbotty | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Batman Botty gpt model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Zero Two DialoGPT Model | {"tags": ["conversational"]} | HAttORi/DialoGPT-Medium-zerotwo | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Zero Two DialoGPT Model | [
"# Zero Two DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Zero Two DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Zero Two DialoGPT Model"
] |
text2text-generation | transformers |
## DistilLED Large CNN 16384
*distil-led-large-cnn-16384* was initialized from [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6), in a fashion similar to [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384).
To be able to process 16K tokens, *sshleifer/distilbart-cnn-12-6*'s position embedding matrix was simply copied 16 times.
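As a quick, hedged illustration of the intended loading pattern (the article text and generation settings below are placeholders, not recommendations from this card):
```python
from transformers import AutoTokenizer, LEDForConditionalGeneration

# Hedged sketch: summarize a long document with this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("HHousen/distil-led-large-cnn-16384")
model = LEDForConditionalGeneration.from_pretrained("HHousen/distil-led-large-cnn-16384")

article = "..."  # a long document, up to 16K tokens
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=16384)
summary_ids = model.generate(inputs["input_ids"], max_length=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```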
This checkpoint should be loaded into `LEDForConditionalGeneration.from_pretrained`. See the [LED documentation](https://huggingface.co/transformers/model_doc/led.html) for more information. | {"language": "en", "license": "apache-2.0", "datasets": ["cnn_dailymail"]} | HHousen/distil-led-large-cnn-16384 | null | [
"transformers",
"pytorch",
"led",
"text2text-generation",
"en",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #led #text2text-generation #en #dataset-cnn_dailymail #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## DistilLED Large CNN 16384
*distil-led-large-cnn-16384* was initialized from sshleifer/distilbart-cnn-12-6, in a fashion similar to allenai/led-large-16384.
To be able to process 16K tokens, *sshleifer/distilbart-cnn-12-6*'s position embedding matrix was simply copied 16 times.
This checkpoint should be loaded into 'LEDForConditionalGeneration.from_pretrained'. See the LED documentation for more information. | [
"## DistilLED Large CNN 16384\n\n*distil-led-large-cnn-16384* was initialized from sshleifer/distilbart-cnn-12-6, in a fashion similar to allenai/led-large-16384.\n\nTo be able to process 16K tokens, *sshleifer/distilbart-cnn-12-6*'s position embedding matrix was simply copied 16 times.\n\nThis checkpoint should be loaded into 'LEDForConditionalGeneration.from_pretrained'. See the LED documentation for more information."
] | [
"TAGS\n#transformers #pytorch #led #text2text-generation #en #dataset-cnn_dailymail #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## DistilLED Large CNN 16384\n\n*distil-led-large-cnn-16384* was initialized from sshleifer/distilbart-cnn-12-6, in a fashion similar to allenai/led-large-16384.\n\nTo be able to process 16K tokens, *sshleifer/distilbart-cnn-12-6*'s position embedding matrix was simply copied 16 times.\n\nThis checkpoint should be loaded into 'LEDForConditionalGeneration.from_pretrained'. See the LED documentation for more information."
] | [
52,
127
] | [
"TAGS\n#transformers #pytorch #led #text2text-generation #en #dataset-cnn_dailymail #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n## DistilLED Large CNN 16384\n\n*distil-led-large-cnn-16384* was initialized from sshleifer/distilbart-cnn-12-6, in a fashion similar to allenai/led-large-16384.\n\nTo be able to process 16K tokens, *sshleifer/distilbart-cnn-12-6*'s position embedding matrix was simply copied 16 times.\n\nThis checkpoint should be loaded into 'LEDForConditionalGeneration.from_pretrained'. See the LED documentation for more information."
] |
image-classification | transformers |
# household-rooms
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
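A minimal sketch of running the classifier on a photo (the image path is a placeholder):
```python
from transformers import pipeline

# Minimal sketch: classify a room photo; the file path is a placeholder.
classifier = pipeline("image-classification", model="HHousen/household-rooms")
print(classifier("path/to/room_photo.jpg"))
```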
## Example Images
#### bathroom

#### bedroom

#### dining room

#### kitchen

#### living room
 | {"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]} | HHousen/household-rooms | null | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# household-rooms
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
## Example Images
#### bathroom
!bathroom
#### bedroom
!bedroom
#### dining room
!dining room
#### kitchen
!kitchen
#### living room
!living room | [
"# household-rooms\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### bathroom\n\n!bathroom",
"#### bedroom\n\n!bedroom",
"#### dining room\n\n!dining room",
"#### kitchen\n\n!kitchen",
"#### living room\n\n!living room"
] | [
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# household-rooms\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### bathroom\n\n!bathroom",
"#### bedroom\n\n!bedroom",
"#### dining room\n\n!dining room",
"#### kitchen\n\n!kitchen",
"#### living room\n\n!living room"
] | [
40,
42,
4,
7,
7,
9,
7,
9
] | [
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n# household-rooms\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.## Example Images#### bathroom\n\n!bathroom#### bedroom\n\n!bedroom#### dining room\n\n!dining room#### kitchen\n\n!kitchen#### living room\n\n!living room"
] |
text-generation | transformers | basically, it makes pickup lines
https://huggingface.co/gpt2
| {} | HJK/PickupLineGenerator | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| basically, it makes pickup lines
URL
| [] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
38
] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | The model that generates the My little pony script
Fine tuning data: [Kaggle](https://www.kaggle.com/liury123/my-little-pony-transcript?select=clean_dialog.csv)
API page: [Ainize](https://ainize.ai/fpem123/GPT2-MyLittlePony)
Demo page: [End point](https://master-gpt2-my-little-pony-fpem123.endpoint.ainize.ai/)
### Model information
Base model: gpt-2 large
Epoch: 30
Train runtime: 4943.9641 secs
Loss: 0.0291
### ===Teachable NLP===
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
| {} | HScomcom/gpt2-MyLittlePony | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| The model that generates the My little pony script
Fine tuning data: Kaggle
API page: Ainize
Demo page: End point
### Model information
Base model: gpt-2 large
Epoch: 30
Train runtime: 4943.9641 secs
Loss: 0.0291
###===Teachable NLP===
To train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.
Teachable NLP: Teachable NLP
Tutorial: Tutorial
| [
"### Model information\n\n Base model: gpt-2 large\n Epoch: 30\n Train runtime: 4943.9641 secs\n Loss: 0.0291"
] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Model information\n\n Base model: gpt-2 large\n Epoch: 30\n Train runtime: 4943.9641 secs\n Loss: 0.0291"
] | [
38,
34
] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Model information\n\n Base model: gpt-2 large\n Epoch: 30\n Train runtime: 4943.9641 secs\n Loss: 0.0291"
] |
text-generation | transformers | ### Model information
Fine tuning data: https://www.kaggle.com/cuddlefish/fairy-tales
License: CC0: Public Domain
Base model: gpt-2 large
Epoch: 30
Train runtime: 17861.6048 secs
Loss: 0.0412
API page: [Ainize](https://ainize.ai/fpem123/GPT2-FairyTales?branch=master)
Demo page: [End-point](https://master-gpt2-fairy-tales-fpem123.endpoint.ainize.ai/)
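A minimal generation sketch (the prompt and sampling settings are illustrative only):
```python
from transformers import pipeline

# Minimal sketch: generate a fairy-tale continuation from a prompt.
generator = pipeline("text-generation", model="HScomcom/gpt2-fairytales")
result = generator("Once upon a time", max_length=100, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```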
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
And my other fairytale model: [showcase](https://forum.ainetwork.ai/t/teachable-nlp-gpt-2-fairy-tales/68) | {} | HScomcom/gpt2-fairytales | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| ### Model information
Fine tuning data: URL
License: CC0: Public Domain
Base model: gpt-2 large
Epoch: 30
Train runtime: 17861.6048 secs
Loss: 0.0412
API page: Ainize
Demo page: End-point
### ===Teachable NLP=== ###
To train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.
Teachable NLP: Teachable NLP
Tutorial: Tutorial
And my other fairytale model: showcase | [
"### Model information\n \n Fine tuning data: URL\n License: CC0: Public Domain\n Base model: gpt-2 large \n Epoch: 30\n Train runtime: 17861.6048 secs\n Loss: 0.0412\n\n\nAPI page: Ainize\n\nDemo page: End-point",
"### ===Teachable NLP=== ###\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial\n\nAnd my other fairytale model: showcase"
] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Model information\n \n Fine tuning data: URL\n License: CC0: Public Domain\n Base model: gpt-2 large \n Epoch: 30\n Train runtime: 17861.6048 secs\n Loss: 0.0412\n\n\nAPI page: Ainize\n\nDemo page: End-point",
"### ===Teachable NLP=== ###\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial\n\nAnd my other fairytale model: showcase"
] | [
38,
57,
73
] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Model information\n \n Fine tuning data: URL\n License: CC0: Public Domain\n Base model: gpt-2 large \n Epoch: 30\n Train runtime: 17861.6048 secs\n Loss: 0.0412\n\n\nAPI page: Ainize\n\nDemo page: End-point### ===Teachable NLP=== ###\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial\n\nAnd my other fairytale model: showcase"
] |
text-generation | transformers | ### Model information
Fine tuning data: https://www.kaggle.com/bennijesus/lovecraft-fiction
License: CC0: Public Domain
Base model: gpt-2 large
Epoch: 30
Train runtime: 10307.3488 secs
Loss: 0.0292
API page: [Ainize](https://ainize.ai/fpem123/GPT2-LoveCraft?branch=master)
Demo page: [End-point](https://master-gpt2-love-craft-fpem123.endpoint.ainize.ai/)
### ===Teachable NLP===
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
And my other lovecraft model: [showcase](https://forum.ainetwork.ai/t/teachable-nlp-gpt-2-lovecraft/71) | {} | HScomcom/gpt2-lovecraft | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| ### Model information
Fine tuning data: URL
License: CC0: Public Domain
Base model: gpt-2 large
Epoch: 30
Train runtime: 10307.3488 secs
Loss: 0.0292
API page: Ainize
Demo page: End-point
### ===Teachable NLP===
To train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.
Teachable NLP: Teachable NLP
Tutorial: Tutorial
And my other lovecraft model: showcase | [
"### Model information\n\n Fine tuning data: URL\n License: CC0: Public Domain\n Base model: gpt-2 large\n Epoch: 30\n Train runtime: 10307.3488 secs\n Loss: 0.0292\n\n\nAPI page: Ainize\n\nDemo page: End-point",
"### ===Teachable NLP===\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial\n\nAnd my other lovecraft model: showcase"
] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Model information\n\n Fine tuning data: URL\n License: CC0: Public Domain\n Base model: gpt-2 large\n Epoch: 30\n Train runtime: 10307.3488 secs\n Loss: 0.0292\n\n\nAPI page: Ainize\n\nDemo page: End-point",
"### ===Teachable NLP===\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial\n\nAnd my other lovecraft model: showcase"
] | [
38,
60,
70
] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Model information\n\n Fine tuning data: URL\n License: CC0: Public Domain\n Base model: gpt-2 large\n Epoch: 30\n Train runtime: 10307.3488 secs\n Loss: 0.0292\n\n\nAPI page: Ainize\n\nDemo page: End-point### ===Teachable NLP===\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial\n\nAnd my other lovecraft model: showcase"
] |
null | null | This is a RainGAN model | {} | HVH/RainGAN | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| This is a RainGAN model | [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
text-generation | transformers |
#Harry Potter DialoGPT Model | {"tags": ["conversational"]} | HackyHackyMan/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Harry Potter DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# My Awesome Model | {"tags": ["conversational"]} | Hadron/DialoGPT-medium-nino | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model | [
"# My Awesome Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] | [
39,
4
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# My Awesome Model"
] |
text-generation | transformers |
# Peter from Your Boyfriend Game.
| {"tags": ["conversational"]} | Hallzy/Peterbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Peter from Your Boyfriend Game.
| [
"# Peter from Your Boyfriend Game."
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Peter from Your Boyfriend Game."
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Peter from Your Boyfriend Game."
] |
text-generation | transformers |
# Jake DialoGPT-large-jake
| {"tags": ["conversational"]} | Hamas/DialoGPT-large-jake | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Jake DialoGPT-large-jake
| [
"# Jake DialoGPT-large-jake"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Jake DialoGPT-large-jake"
] | [
39,
9
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Jake DialoGPT-large-jake"
] |
text-generation | transformers |
# Jake DialoGPT-large-jake2
| {"tags": ["conversational"]} | Hamas/DialoGPT-large-jake2 | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Jake DialoGPT-large-jake2
| [
"# Jake DialoGPT-large-jake2"
] | [
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Jake DialoGPT-large-jake2"
] | [
43,
10
] | [
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Jake DialoGPT-large-jake2"
] |
text-generation | transformers |
# Jake DialoGPT-large-jake
| {"tags": ["conversational"]} | Hamas/DialoGPT-large-jake3 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Jake DialoGPT-large-jake
| [
"# Jake DialoGPT-large-jake"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Jake DialoGPT-large-jake"
] | [
39,
9
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Jake DialoGPT-large-jake"
] |
text-generation | transformers |
# Jake DialoGPT-large-jake
| {"tags": ["conversational"]} | Hamas/DialoGPT-large-jake4 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Jake DialoGPT-large-jake
| [
"# Jake DialoGPT-large-jake"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Jake DialoGPT-large-jake"
] | [
39,
9
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Jake DialoGPT-large-jake"
] |
text-generation | transformers |
#Rick DialoGPT Model | {"tags": ["conversational"]} | Hamhams/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Rick DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
## GPT2-Home
This model is fine-tuned using GPT-2 on amazon home products metadata.
It can generate descriptions for your **home** products by getting a text prompt.
### Model description
[GPT-2](https://openai.com/blog/better-language-models/) is a large [transformer](https://arxiv.org/abs/1706.03762)-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.
### Live Demo
For testing model with special configuration, please visit [Demo](https://huggingface.co/spaces/HamidRezaAttar/gpt2-home)
### Blog Post
For more detailed information about project development please refer to my [blog post](https://hamidrezaattar.github.io/blog/markdown/2022/02/17/gpt2-home.html).
### How to use
For best experience and clean outputs, you can use Live Demo mentioned above, also you can use the notebook mentioned in my [GitHub](https://github.com/HamidRezaAttar/GPT2-Home)
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("HamidRezaAttar/gpt2-product-description-generator")
>>> model = AutoModelForCausalLM.from_pretrained("HamidRezaAttar/gpt2-product-description-generator")
>>> generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':100})
>>> generated_text = generator("This bed is very comfortable.")
```
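Generation options such as length and sampling can also be passed directly at call time; the values below are illustrative, not settings used in training.
```python
>>> outputs = generator("This bed is very comfortable.", max_length=100, do_sample=True, top_p=0.95)
>>> print(outputs[0]["generated_text"])
```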
### Citation info
```bibtex
@misc{GPT2-Home,
author = {HamidReza Fatollah Zadeh Attar},
title = {GPT2-Home the English home product description generator},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/HamidRezaAttar/GPT2-Home}},
}
```
| {"language": "en", "license": "apache-2.0", "tags": ["text-generation"], "widget": [{"text": "Maximize your bedroom space without sacrificing style with the storage bed."}, {"text": "Handcrafted of solid acacia in weathered gray, our round Jozy drop-leaf dining table is a space-saving."}, {"text": "Our plush and luxurious Emmett modular sofa brings custom comfort to your living space."}]} | HamidRezaAttar/gpt2-product-description-generator | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"arxiv:1706.03762",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1706.03762"
] | [
"en"
] | TAGS
#transformers #pytorch #gpt2 #text-generation #en #arxiv-1706.03762 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
## GPT2-Home
This model is fine-tuned using GPT-2 on amazon home products metadata.
It can generate descriptions for your home products by getting a text prompt.
### Model description
GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.
### Live Demo
For testing model with special configuration, please visit Demo
### Blog Post
For more detailed information about project development please refer to my blog post.
### How to use
For best experience and clean outputs, you can use Live Demo mentioned above, also you can use the notebook mentioned in my GitHub
You can use this model directly with a pipeline for text generation.
info
| [
"## GPT2-Home\n\nThis model is fine-tuned using GPT-2 on amazon home products metadata. \nIt can generate descriptions for your home products by getting a text prompt.",
"### Model description\n\n\nGPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.",
"### Live Demo\nFor testing model with special configuration, please visit Demo",
"### Blog Post\nFor more detailed information about project development please refer to my blog post.",
"### How to use\nFor best experience and clean outputs, you can use Live Demo mentioned above, also you can use the notebook mentioned in my GitHub\n\nYou can use this model directly with a pipeline for text generation.\n\n\ninfo"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #en #arxiv-1706.03762 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"## GPT2-Home\n\nThis model is fine-tuned using GPT-2 on amazon home products metadata. \nIt can generate descriptions for your home products by getting a text prompt.",
"### Model description\n\n\nGPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.",
"### Live Demo\nFor testing model with special configuration, please visit Demo",
"### Blog Post\nFor more detailed information about project development please refer to my blog post.",
"### How to use\nFor best experience and clean outputs, you can use Live Demo mentioned above, also you can use the notebook mentioned in my GitHub\n\nYou can use this model directly with a pipeline for text generation.\n\n\ninfo"
] | [
62,
38,
117,
15,
19,
47
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #en #arxiv-1706.03762 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n## GPT2-Home\n\nThis model is fine-tuned using GPT-2 on amazon home products metadata. \nIt can generate descriptions for your home products by getting a text prompt.### Model description\n\n\nGPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.### Live Demo\nFor testing model with special configuration, please visit Demo### Blog Post\nFor more detailed information about project development please refer to my blog post.### How to use\nFor best experience and clean outputs, you can use Live Demo mentioned above, also you can use the notebook mentioned in my GitHub\n\nYou can use this model directly with a pipeline for text generation.\n\n\ninfo"
] |
null | null | Model Description | {} | Hanchen/testRepo | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| Model Description | [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9259
- Recall: 0.9369
- F1: 0.9314
- Accuracy: 0.9839
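
As a quick sanity check, the fine-tuned checkpoint can be queried with the token-classification pipeline; the example sentence is arbitrary, and the predicted labels follow the CoNLL-2003 scheme (PER, ORG, LOC, MISC).

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Hank/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```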
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.243 | 1.0 | 878 | 0.0703 | 0.9134 | 0.9181 | 0.9158 | 0.9806 |
| 0.0515 | 2.0 | 1756 | 0.0609 | 0.9214 | 0.9343 | 0.9278 | 0.9832 |
| 0.0305 | 3.0 | 2634 | 0.0612 | 0.9259 | 0.9369 | 0.9314 | 0.9839 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9839229828268226}}]}]} | Hank/distilbert-base-uncased-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0612
* Precision: 0.9259
* Recall: 0.9369
* F1: 0.9314
* Accuracy: 0.9839
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.9.1
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] | [
55,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# Rick from Rick & Morty DialoGPT Model | {"tags": ["conversational"]} | HansAnonymous/DialoGPT-medium-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick from Rick & Morty DialoGPT Model | [
"# Rick from Rick & Morty DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick from Rick & Morty DialoGPT Model"
] | [
39,
11
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Rick from Rick & Morty DialoGPT Model"
] |
text-generation | transformers |
# Shrek from Shrek DialoGPT Model | {"tags": ["conversational"]} | HansAnonymous/DialoGPT-small-shrek | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Shrek from Shrek DialoGPT Model | [
"# Shrek from Shrek DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Shrek from Shrek DialoGPT Model"
] | [
39,
10
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Shrek from Shrek DialoGPT Model"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
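
For a language model the evaluation loss is easier to interpret as perplexity, i.e. exp(loss) ≈ 38.2 here:

```python
import math

eval_loss = 3.6424
print(math.exp(eval_loss))  # ≈ 38.2, the model's perplexity on the validation split
```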
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"]} | Haotian/distilgpt2-finetuned-wikitext2 | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| distilgpt2-finetuned-wikitext2
==============================
This model is a fine-tuned version of distilgpt2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 3.6424
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.0\n* Tokenizers 0.10.3"
] | [
53,
103,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0### Training results### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9613
- Wer: 0.5376
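
For transcription, the checkpoint can be loaded through the speech-recognition pipeline; `audio.wav` is a placeholder for a 16 kHz mono Urdu recording (other sampling rates are handled by the pipeline's ffmpeg-based loader).

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="HarrisDePerceptron/xls-r-1b-ur")
print(asr("audio.wav")["text"])  # "audio.wav" is a placeholder path
```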
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 50.0
- mixed_precision_training: Native AMP
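
For reference, these settings map onto `TrainingArguments` roughly as follows; the output directory is a placeholder, and every option not listed above keeps its library default (e.g. the Adam betas/epsilon and the linear scheduler).

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./xls-r-1b-ur",      # placeholder
    learning_rate=7.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective batch size of 16
    warmup_steps=50,
    num_train_epochs=50,
    fp16=True,                       # native AMP
    seed=42,
)
```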
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3118 | 1.96 | 100 | 2.9093 | 0.9982 |
| 2.2071 | 3.92 | 200 | 1.1737 | 0.7779 |
| 1.6098 | 5.88 | 300 | 0.9984 | 0.7015 |
| 1.4333 | 7.84 | 400 | 0.9800 | 0.6705 |
| 1.2859 | 9.8 | 500 | 0.9582 | 0.6487 |
| 1.2073 | 11.76 | 600 | 0.8841 | 0.6077 |
| 1.1417 | 13.73 | 700 | 0.9118 | 0.6343 |
| 1.0988 | 15.69 | 800 | 0.9217 | 0.6196 |
| 1.0279 | 17.65 | 900 | 0.9165 | 0.5867 |
| 0.9765 | 19.61 | 1000 | 0.9306 | 0.5978 |
| 0.9161 | 21.57 | 1100 | 0.9305 | 0.5768 |
| 0.8395 | 23.53 | 1200 | 0.9828 | 0.5819 |
| 0.8306 | 25.49 | 1300 | 0.9397 | 0.5760 |
| 0.7819 | 27.45 | 1400 | 0.9544 | 0.5742 |
| 0.7509 | 29.41 | 1500 | 0.9278 | 0.5690 |
| 0.7218 | 31.37 | 1600 | 0.9003 | 0.5587 |
| 0.6725 | 33.33 | 1700 | 0.9659 | 0.5554 |
| 0.6287 | 35.29 | 1800 | 0.9522 | 0.5561 |
| 0.6077 | 37.25 | 1900 | 0.9154 | 0.5465 |
| 0.5873 | 39.22 | 2000 | 0.9331 | 0.5469 |
| 0.5621 | 41.18 | 2100 | 0.9335 | 0.5491 |
| 0.5168 | 43.14 | 2200 | 0.9632 | 0.5458 |
| 0.5114 | 45.1 | 2300 | 0.9349 | 0.5387 |
| 0.4986 | 47.06 | 2400 | 0.9364 | 0.5380 |
| 0.4761 | 49.02 | 2500 | 0.9584 | 0.5391 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ur", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "ur"}, "metrics": [{"type": "wer", "value": 44.13, "name": "Test WER"}]}]}]} | HarrisDePerceptron/xls-r-1b-ur | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ur",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ur"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ur #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - UR dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9613
* Wer: 0.5376
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 50
* num\_epochs: 50.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ur #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
92,
155,
5,
47
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ur #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2924
- Wer: 0.7201
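
The reported WER can be recomputed with the `wer` metric from `datasets` (which requires the `jiwer` package); the reference and prediction lists below are placeholders.

```python
from datasets import load_metric

wer_metric = load_metric("wer")
references = ["a reference transcription"]   # placeholder
predictions = ["a predicted transcription"]  # placeholder
print(wer_metric.compute(references=references, predictions=predictions))
```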
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 11.2783 | 4.17 | 100 | 4.6409 | 1.0 |
| 3.5578 | 8.33 | 200 | 3.1649 | 1.0 |
| 3.1279 | 12.5 | 300 | 3.0335 | 1.0 |
| 2.9944 | 16.67 | 400 | 2.9526 | 0.9983 |
| 2.9275 | 20.83 | 500 | 2.9291 | 1.0009 |
| 2.8077 | 25.0 | 600 | 2.5633 | 0.9895 |
| 2.4438 | 29.17 | 700 | 1.9045 | 0.9564 |
| 1.9659 | 33.33 | 800 | 1.4114 | 0.7960 |
| 1.7092 | 37.5 | 900 | 1.2584 | 0.7637 |
| 1.517 | 41.67 | 1000 | 1.2040 | 0.7507 |
| 1.3966 | 45.83 | 1100 | 1.1273 | 0.7463 |
| 1.3197 | 50.0 | 1200 | 1.1054 | 0.6957 |
| 1.2476 | 54.17 | 1300 | 1.1035 | 0.7001 |
| 1.1796 | 58.33 | 1400 | 1.0890 | 0.7097 |
| 1.1237 | 62.5 | 1500 | 1.0883 | 0.7167 |
| 1.0777 | 66.67 | 1600 | 1.1067 | 0.7219 |
| 1.0051 | 70.83 | 1700 | 1.1115 | 0.7236 |
| 0.9521 | 75.0 | 1800 | 1.0867 | 0.7132 |
| 0.9147 | 79.17 | 1900 | 1.0852 | 0.7210 |
| 0.8798 | 83.33 | 2000 | 1.1411 | 0.7097 |
| 0.8317 | 87.5 | 2100 | 1.1634 | 0.7018 |
| 0.7946 | 91.67 | 2200 | 1.1621 | 0.7201 |
| 0.7594 | 95.83 | 2300 | 1.1482 | 0.7036 |
| 0.729 | 100.0 | 2400 | 1.1493 | 0.7062 |
| 0.7055 | 104.17 | 2500 | 1.1726 | 0.6931 |
| 0.6622 | 108.33 | 2600 | 1.1938 | 0.7001 |
| 0.6583 | 112.5 | 2700 | 1.1832 | 0.7149 |
| 0.6299 | 116.67 | 2800 | 1.1996 | 0.7175 |
| 0.5903 | 120.83 | 2900 | 1.1986 | 0.7132 |
| 0.5816 | 125.0 | 3000 | 1.1909 | 0.7010 |
| 0.5583 | 129.17 | 3100 | 1.2079 | 0.6870 |
| 0.5392 | 133.33 | 3200 | 1.2109 | 0.7228 |
| 0.5412 | 137.5 | 3300 | 1.2353 | 0.7245 |
| 0.5136 | 141.67 | 3400 | 1.2390 | 0.7254 |
| 0.5007 | 145.83 | 3500 | 1.2273 | 0.7123 |
| 0.4883 | 150.0 | 3600 | 1.2773 | 0.7289 |
| 0.4835 | 154.17 | 3700 | 1.2678 | 0.7289 |
| 0.4568 | 158.33 | 3800 | 1.2592 | 0.7350 |
| 0.4525 | 162.5 | 3900 | 1.2705 | 0.7254 |
| 0.4379 | 166.67 | 4000 | 1.2717 | 0.7306 |
| 0.4198 | 170.83 | 4100 | 1.2618 | 0.7219 |
| 0.4216 | 175.0 | 4200 | 1.2909 | 0.7158 |
| 0.4305 | 179.17 | 4300 | 1.2808 | 0.7167 |
| 0.399 | 183.33 | 4400 | 1.2750 | 0.7193 |
| 0.3937 | 187.5 | 4500 | 1.2719 | 0.7149 |
| 0.3905 | 191.67 | 4600 | 1.2816 | 0.7158 |
| 0.3892 | 195.83 | 4700 | 1.2951 | 0.7210 |
| 0.3932 | 200.0 | 4800 | 1.2924 | 0.7201 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | HarrisDePerceptron/xls-r-300m-ur-cv7 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ur",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ur"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ur #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - UR dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2924
* Wer: 0.7201
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 200.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 200.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ur #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 200.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
67,
155,
5,
47
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ur #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 200.0\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3](https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5443
- Wer: 0.7030
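
A sketch of how the evaluation split can be loaded and resampled for this model; Common Voice 8.0 is gated, so an authenticated Hugging Face token is assumed.

```python
from datasets import Audio, load_dataset

cv_ur = load_dataset("mozilla-foundation/common_voice_8_0", "ur", split="test", use_auth_token=True)
# Common Voice audio ships at 48 kHz; XLS-R models expect 16 kHz input
cv_ur = cv_ur.cast_column("audio", Audio(sampling_rate=16_000))
```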
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000388
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.7052 | 1.96 | 100 | 3.4683 | 1.0 |
| 3.2395 | 3.92 | 200 | 3.1489 | 1.0 |
| 2.9951 | 5.88 | 300 | 2.9823 | 1.0007 |
| 2.3574 | 7.84 | 400 | 1.2614 | 0.7598 |
| 1.7287 | 9.8 | 500 | 1.1817 | 0.7421 |
| 1.6144 | 11.76 | 600 | 1.1315 | 0.7321 |
| 1.5598 | 13.73 | 700 | 1.2322 | 0.7550 |
| 1.5418 | 15.69 | 800 | 1.2721 | 0.7819 |
| 1.4578 | 17.65 | 900 | 1.1710 | 0.7531 |
| 1.4311 | 19.61 | 1000 | 1.2042 | 0.7491 |
| 1.3483 | 21.57 | 1100 | 1.1702 | 0.7465 |
| 1.3078 | 23.53 | 1200 | 1.1963 | 0.7421 |
| 1.2576 | 25.49 | 1300 | 1.1501 | 0.7280 |
| 1.2173 | 27.45 | 1400 | 1.2526 | 0.7299 |
| 1.2217 | 29.41 | 1500 | 1.2479 | 0.7310 |
| 1.1536 | 31.37 | 1600 | 1.2567 | 0.7432 |
| 1.0939 | 33.33 | 1700 | 1.2801 | 0.7247 |
| 1.0745 | 35.29 | 1800 | 1.2340 | 0.7151 |
| 1.0454 | 37.25 | 1900 | 1.2372 | 0.7151 |
| 1.0101 | 39.22 | 2000 | 1.2461 | 0.7376 |
| 0.9833 | 41.18 | 2100 | 1.2553 | 0.7269 |
| 0.9314 | 43.14 | 2200 | 1.2372 | 0.7015 |
| 0.9147 | 45.1 | 2300 | 1.3035 | 0.7358 |
| 0.8758 | 47.06 | 2400 | 1.2598 | 0.7092 |
| 0.8356 | 49.02 | 2500 | 1.2557 | 0.7144 |
| 0.8105 | 50.98 | 2600 | 1.2619 | 0.7236 |
| 0.7947 | 52.94 | 2700 | 1.3994 | 0.7491 |
| 0.7623 | 54.9 | 2800 | 1.2932 | 0.7133 |
| 0.7282 | 56.86 | 2900 | 1.2799 | 0.7089 |
| 0.7108 | 58.82 | 3000 | 1.3615 | 0.7148 |
| 0.6896 | 60.78 | 3100 | 1.3129 | 0.7041 |
| 0.6496 | 62.75 | 3200 | 1.4050 | 0.6934 |
| 0.6075 | 64.71 | 3300 | 1.3571 | 0.7026 |
| 0.6242 | 66.67 | 3400 | 1.3369 | 0.7063 |
| 0.5865 | 68.63 | 3500 | 1.4368 | 0.7140 |
| 0.5721 | 70.59 | 3600 | 1.4224 | 0.7066 |
| 0.5475 | 72.55 | 3700 | 1.4798 | 0.7118 |
| 0.5086 | 74.51 | 3800 | 1.5107 | 0.7232 |
| 0.4958 | 76.47 | 3900 | 1.4849 | 0.7089 |
| 0.5046 | 78.43 | 4000 | 1.4451 | 0.7114 |
| 0.4694 | 80.39 | 4100 | 1.4674 | 0.7089 |
| 0.4386 | 82.35 | 4200 | 1.5245 | 0.7103 |
| 0.4516 | 84.31 | 4300 | 1.5032 | 0.7103 |
| 0.4113 | 86.27 | 4400 | 1.5246 | 0.7196 |
| 0.3972 | 88.24 | 4500 | 1.5318 | 0.7114 |
| 0.4006 | 90.2 | 4600 | 1.5543 | 0.6982 |
| 0.4014 | 92.16 | 4700 | 1.5442 | 0.7048 |
| 0.3672 | 94.12 | 4800 | 1.5542 | 0.7137 |
| 0.3666 | 96.08 | 4900 | 1.5414 | 0.7018 |
| 0.3574 | 98.04 | 5000 | 1.5465 | 0.7059 |
| 0.3428 | 100.0 | 5100 | 1.5443 | 0.7030 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | HarrisDePerceptron/xls-r-300m-ur-cv8-hi | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ur",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ur"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ur #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3 on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - UR dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5443
* Wer: 0.7030
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000388
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 750
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000388\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 750\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ur #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000388\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 750\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
67,
154,
5,
47
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ur #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000388\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 750\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [HarrisDePerceptron/xls-r-300m-ur](https://huggingface.co/HarrisDePerceptron/xls-r-300m-ur) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0517
- WER: 0.5151291512915129
- CER: 0.23689640940982254
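
Inference can also be run without the pipeline, assuming the repository ships the usual Wav2Vec2 processor files; the waveform below is a silent placeholder to be replaced with a real 16 kHz recording.

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("HarrisDePerceptron/xls-r-300m-ur")
model = Wav2Vec2ForCTC.from_pretrained("HarrisDePerceptron/xls-r-300m-ur")

speech = np.zeros(16_000, dtype=np.float32)  # placeholder: one second of silence

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```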
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2991 | 1.96 | 100 | 0.9769 | 0.6627 |
| 1.3415 | 3.92 | 200 | 0.9701 | 0.6594 |
| 1.2998 | 5.88 | 300 | 0.9678 | 0.6668 |
| 1.2881 | 7.84 | 400 | 0.9650 | 0.6613 |
| 1.2369 | 9.8 | 500 | 0.9392 | 0.6502 |
| 1.2293 | 11.76 | 600 | 0.9536 | 0.6480 |
| 1.1709 | 13.73 | 700 | 0.9265 | 0.6402 |
| 1.1492 | 15.69 | 800 | 0.9636 | 0.6506 |
| 1.1044 | 17.65 | 900 | 0.9305 | 0.6351 |
| 1.0704 | 19.61 | 1000 | 0.9329 | 0.6280 |
| 1.0039 | 21.57 | 1100 | 0.9413 | 0.6295 |
| 0.9756 | 23.53 | 1200 | 0.9718 | 0.6185 |
| 0.9633 | 25.49 | 1300 | 0.9731 | 0.6133 |
| 0.932 | 27.45 | 1400 | 0.9659 | 0.6199 |
| 0.9252 | 29.41 | 1500 | 0.9766 | 0.6196 |
| 0.9172 | 31.37 | 1600 | 1.0052 | 0.6199 |
| 0.8733 | 33.33 | 1700 | 0.9955 | 0.6203 |
| 0.868 | 35.29 | 1800 | 1.0069 | 0.6240 |
| 0.8547 | 37.25 | 1900 | 0.9783 | 0.6258 |
| 0.8451 | 39.22 | 2000 | 0.9845 | 0.6052 |
| 0.8374 | 41.18 | 2100 | 0.9496 | 0.6137 |
| 0.8153 | 43.14 | 2200 | 0.9756 | 0.6122 |
| 0.8134 | 45.1 | 2300 | 0.9712 | 0.6096 |
| 0.8019 | 47.06 | 2400 | 0.9565 | 0.5970 |
| 0.7746 | 49.02 | 2500 | 0.9864 | 0.6096 |
| 0.7664 | 50.98 | 2600 | 0.9988 | 0.6092 |
| 0.7708 | 52.94 | 2700 | 1.0181 | 0.6255 |
| 0.7468 | 54.9 | 2800 | 0.9918 | 0.6148 |
| 0.7241 | 56.86 | 2900 | 1.0150 | 0.6018 |
| 0.7165 | 58.82 | 3000 | 1.0439 | 0.6063 |
| 0.7104 | 60.78 | 3100 | 1.0016 | 0.6037 |
| 0.6954 | 62.75 | 3200 | 1.0117 | 0.5970 |
| 0.6753 | 64.71 | 3300 | 1.0191 | 0.6037 |
| 0.6803 | 66.67 | 3400 | 1.0190 | 0.6033 |
| 0.661 | 68.63 | 3500 | 1.0284 | 0.6007 |
| 0.6597 | 70.59 | 3600 | 1.0060 | 0.5967 |
| 0.6398 | 72.55 | 3700 | 1.0372 | 0.6048 |
| 0.6105 | 74.51 | 3800 | 1.0048 | 0.6044 |
| 0.6164 | 76.47 | 3900 | 1.0398 | 0.6148 |
| 0.6354 | 78.43 | 4000 | 1.0272 | 0.6133 |
| 0.5952 | 80.39 | 4100 | 1.0364 | 0.6081 |
| 0.5814 | 82.35 | 4200 | 1.0418 | 0.6092 |
| 0.6079 | 84.31 | 4300 | 1.0277 | 0.5967 |
| 0.5748 | 86.27 | 4400 | 1.0362 | 0.6041 |
| 0.5624 | 88.24 | 4500 | 1.0427 | 0.6007 |
| 0.5767 | 90.2 | 4600 | 1.0370 | 0.5919 |
| 0.5793 | 92.16 | 4700 | 1.0442 | 0.6011 |
| 0.547 | 94.12 | 4800 | 1.0516 | 0.5982 |
| 0.5513 | 96.08 | 4900 | 1.0461 | 0.5989 |
| 0.5429 | 98.04 | 5000 | 1.0504 | 0.5996 |
| 0.5404 | 100.0 | 5100 | 1.0517 | 0.5967 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ur", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "ur"}, "metrics": [{"type": "wer", "value": 47.38, "name": "Test WER"}]}]}]} | HarrisDePerceptron/xls-r-300m-ur | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ur",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ur"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ur #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of HarrisDePerceptron/xls-r-300m-ur on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - UR dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0517
* WER: 0.5151291512915129
* CER: 0.23689640940982254
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ur #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
92,
155,
5,
47
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ur #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8888
- Wer: 0.6642
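
Since the model expects 16 kHz input, local recordings can be resampled before inference; `sample.wav` is a placeholder path.

```python
import torchaudio
import torchaudio.functional as F

waveform, sample_rate = torchaudio.load("sample.wav")  # placeholder path
if sample_rate != 16_000:
    waveform = F.resample(waveform, sample_rate, 16_000)  # XLSR-53 was pretrained on 16 kHz audio
```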
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.1224 | 1.96 | 100 | 3.5429 | 1.0 |
| 3.2411 | 3.92 | 200 | 3.1786 | 1.0 |
| 3.1283 | 5.88 | 300 | 3.0571 | 1.0 |
| 3.0044 | 7.84 | 400 | 2.9560 | 0.9996 |
| 2.9388 | 9.8 | 500 | 2.8977 | 1.0011 |
| 2.86 | 11.76 | 600 | 2.6944 | 0.9952 |
| 2.5538 | 13.73 | 700 | 2.0967 | 0.9435 |
| 2.1214 | 15.69 | 800 | 1.4816 | 0.8428 |
| 1.8136 | 17.65 | 900 | 1.2459 | 0.8048 |
| 1.6795 | 19.61 | 1000 | 1.1232 | 0.7649 |
| 1.5571 | 21.57 | 1100 | 1.0510 | 0.7432 |
| 1.4975 | 23.53 | 1200 | 1.0298 | 0.6963 |
| 1.4485 | 25.49 | 1300 | 0.9775 | 0.7074 |
| 1.3924 | 27.45 | 1400 | 0.9798 | 0.6956 |
| 1.3604 | 29.41 | 1500 | 0.9345 | 0.7092 |
| 1.3224 | 31.37 | 1600 | 0.9535 | 0.6830 |
| 1.2816 | 33.33 | 1700 | 0.9178 | 0.6679 |
| 1.2623 | 35.29 | 1800 | 0.9249 | 0.6679 |
| 1.2421 | 37.25 | 1900 | 0.9124 | 0.6734 |
| 1.2208 | 39.22 | 2000 | 0.8962 | 0.6664 |
| 1.2145 | 41.18 | 2100 | 0.8903 | 0.6734 |
| 1.1888 | 43.14 | 2200 | 0.8883 | 0.6708 |
| 1.1933 | 45.1 | 2300 | 0.8928 | 0.6723 |
| 1.1838 | 47.06 | 2400 | 0.8868 | 0.6679 |
| 1.1634 | 49.02 | 2500 | 0.8886 | 0.6657 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ur", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "ur"}, "metrics": [{"type": "wer", "value": 62.47, "name": "Test WER"}]}]}]} | HarrisDePerceptron/xlsr-large-53-ur | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ur",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ur"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ur #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - UR dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8888
* Wer: 0.6642
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 50
* num\_epochs: 50.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ur #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
92,
155,
5,
47
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ur #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | HarryPuttar/HarryPotterDC | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DailogGPT Model | [
"# Harry Potter DailogGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DailogGPT Model"
] | [
39,
8
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DailogGPT Model"
] |
text-generation | transformers |
# Jack Sparrow GPT | {"tags": ["conversational"]} | Harshal6927/Jack_Sparrow_GPT | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Jack Sparrow GPT | [
"# Jack Sparrow GPT"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Jack Sparrow GPT"
] | [
43,
5
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# Jack Sparrow GPT"
] |
text-generation | transformers |
# Tony Stark GPT
My first AI model, still learning; it was trained on a small dataset, so don't expect much. | {"tags": ["conversational"]} | Harshal6927/Tony_Stark_GPT | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Tony Stark GPT
My first AI model still learning, used small dataset so don't expect much | [
"# Tony Stark GPT\n\nMy first AI model still learning, used small dataset so don't expect much"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Tony Stark GPT\n\nMy first AI model still learning, used small dataset so don't expect much"
] | [
39,
22
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Tony Stark GPT\n\nMy first AI model still learning, used small dataset so don't expect much"
] |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 32597818
- CO2 Emissions (in grams): 8.655894631203154
## Validation Metrics
- Loss: 0.5410276651382446
- MSE: 0.5410276651382446
- MAE: 0.5694561004638672
- R2: 0.6830431129198475
- RMSE: 0.735545814037323
- Explained Variance: 0.6834385395050049
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Harshveer/autonlp-formality_scoring_2-32597818
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Harshveer/autonlp-formality_scoring_2-32597818", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Harshveer/autonlp-formality_scoring_2-32597818", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
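# The problem type above is "Single Column Regression", so the head returns a single
# logit per input; reading it out as the formality score is an assumption based on that.
formality_score = outputs.logits.squeeze().item()
print(formality_score)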
``` | {"language": "en", "tags": "autonlp", "datasets": ["Harshveer/autonlp-data-formality_scoring_2"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 8.655894631203154} | Harshveer/autonlp-formality_scoring_2-32597818 | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:Harshveer/autonlp-data-formality_scoring_2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-Harshveer/autonlp-data-formality_scoring_2 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 32597818
- CO2 Emissions (in grams): 8.655894631203154
## Validation Metrics
- Loss: 0.5410276651382446
- MSE: 0.5410276651382446
- MAE: 0.5694561004638672
- R2: 0.6830431129198475
- RMSE: 0.735545814037323
- Explained Variance: 0.6834385395050049
## Usage
You can use cURL to access this model:
Or Python API:
| [
"# Model Trained Using AutoNLP\n\n- Problem type: Single Column Regression\n- Model ID: 32597818\n- CO2 Emissions (in grams): 8.655894631203154",
"## Validation Metrics\n\n- Loss: 0.5410276651382446\n- MSE: 0.5410276651382446\n- MAE: 0.5694561004638672\n- R2: 0.6830431129198475\n- RMSE: 0.735545814037323\n- Explained Variance: 0.6834385395050049",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-Harshveer/autonlp-data-formality_scoring_2 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Single Column Regression\n- Model ID: 32597818\n- CO2 Emissions (in grams): 8.655894631203154",
"## Validation Metrics\n\n- Loss: 0.5410276651382446\n- MSE: 0.5410276651382446\n- MAE: 0.5694561004638672\n- R2: 0.6830431129198475\n- RMSE: 0.735545814037323\n- Explained Variance: 0.6834385395050049",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
62,
42,
91,
16
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-Harshveer/autonlp-data-formality_scoring_2 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Single Column Regression\n- Model ID: 32597818\n- CO2 Emissions (in grams): 8.655894631203154## Validation Metrics\n\n- Loss: 0.5410276651382446\n- MSE: 0.5410276651382446\n- MAE: 0.5694561004638672\n- R2: 0.6830431129198475\n- RMSE: 0.735545814037323\n- Explained Variance: 0.6834385395050049## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
automatic-speech-recognition | transformers |
# hindi_base_wav2vec2 | {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "hi", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["Harveenchadha/indic-voice"], "model-index": [{"name": "Hindi Large", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice", "type": "common_voice", "args": "hi"}, "metrics": [{"type": "wer", "value": 22.62, "name": "Test WER"}, {"type": "cer", "value": 7.42, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice-7.0", "type": "mozilla-foundation/common_voice_7_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 19.47, "name": "Test WER"}, {"type": "cer", "value": 8.05, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice-8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 20.87, "name": "Test WER"}, {"type": "cer", "value": 9.47, "name": "Test CER"}]}]}]} | Harveenchadha/hindi_base_wav2vec2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"hi",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:Harveenchadha/indic-voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hi"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #hi #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-Harveenchadha/indic-voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# hindi_base_wav2vec2 | [
"# hindi_base_wav2vec2"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #hi #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-Harveenchadha/indic-voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# hindi_base_wav2vec2"
] | [
94,
11
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #hi #model_for_talk #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-Harveenchadha/indic-voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n# hindi_base_wav2vec2"
] |
text2text-generation | transformers | **Work in progress** | {} | Harveenchadha/indictrans | null | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #m2m_100 #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
| Work in progress | [] | [
"TAGS\n#transformers #pytorch #m2m_100 #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
33
] | [
"TAGS\n#transformers #pytorch #m2m_100 #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | keras |
## Multimodal entailment
Author: Sayak Paul
Date created: 2021/08/08
Last modified: 2021/08/15
Description: Training a multimodal model for predicting entailment.
### What is multimodal entailment?
On social media platforms, to audit and moderate content we may want to find answers to the following questions in near real-time:
Does a given piece of information contradict the other?
Does a given piece of information imply the other?
In NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities. | {"library_name": "keras", "tags": ["nlp"]} | Harveenchadha/model-entailment | null | [
"keras",
"nlp",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#keras #nlp #region-us
|
## Multimodal entailment
Author: Sayak Paul
Date created: 2021/08/08
Last modified: 2021/08/15
Description: Training a multimodal model for predicting entailment.
### What is multimodal entailment?
On social media platforms, to audit and moderate content we may want to find answers to the following questions in near real-time:
Does a given piece of information contradict the other?
Does a given piece of information imply the other?
In NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities. | [
"## Multimodal entailment\nAuthor: Sayak Paul\nDate created: 2021/08/08\nLast modified: 2021/08/15\nDescription: Training a multimodal model for predicting entailment.",
"### What is multimodal entailment?\nOn social media platforms, to audit and moderate content we may want to find answers to the following questions in near real-time:\n\nDoes a given piece of information contradict the other?\nDoes a given piece of information imply the other?\nIn NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities."
] | [
"TAGS\n#keras #nlp #region-us \n",
"## Multimodal entailment\nAuthor: Sayak Paul\nDate created: 2021/08/08\nLast modified: 2021/08/15\nDescription: Training a multimodal model for predicting entailment.",
"### What is multimodal entailment?\nOn social media platforms, to audit and moderate content we may want to find answers to the following questions in near real-time:\n\nDoes a given piece of information contradict the other?\nDoes a given piece of information imply the other?\nIn NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities."
] | [
11,
43,
149
] | [
"TAGS\n#keras #nlp #region-us \n## Multimodal entailment\nAuthor: Sayak Paul\nDate created: 2021/08/08\nLast modified: 2021/08/15\nDescription: Training a multimodal model for predicting entailment.### What is multimodal entailment?\nOn social media platforms, to audit and moderate content we may want to find answers to the following questions in near real-time:\n\nDoes a given piece of information contradict the other?\nDoes a given piece of information imply the other?\nIn NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities."
] |
automatic-speech-recognition | transformers |
## Spaces Demo
Check the spaces demo [here](https://huggingface.co/spaces/Harveenchadha/wav2vec2-vakyansh-hindi/tree/main)
## Pretrained Model
Fine-tuned on Multilingual Pretrained Model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is present [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz.
**Note: The result from this model is without a language model so you may witness a higher WER in some cases.**
## Dataset
This model was trained on 4200 hours of Hindi labelled data. The labelled data is not present in the public domain as of now.
## Training Script
Models were trained using an experimental platform set up by the Vakyansh team at Ekstep. Here is the [training repository](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation).
In case you want to explore the training logs on wandb, they are [here](https://wandb.ai/harveenchadha/hindi_finetuning_multilingual?workspace=user-harveenchadha).
## [Colab Demo](https://colab.research.google.com/github/harveenchadha/bol/blob/main/demos/hf/hindi/hf_hindi_him_4200_demo.ipynb)
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
    # load pretrained model
    processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
    model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
    # load audio
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # INFERENCE
    # retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
```
## Evaluation
The model can be evaluated as follows on the Hindi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids, skip_special_tokens=True)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 33.17 %
[**Colab Evaluation**](https://colab.research.google.com/github/harveenchadha/bol/blob/main/demos/hf/hindi/hf_vakyansh_hindi_him_4200_evaluation_common_voice.ipynb)
## Credits
Thanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages. | {"language": "hi", "license": "mit", "tags": ["audio", "automatic-speech-recognition", "speech"], "metrics": ["wer"], "model-index": [{"name": "Wav2Vec2 Vakyansh Hindi Model by Harveen Chadha", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hi", "type": "common_voice", "args": "hi"}, "metrics": [{"type": "wer", "value": 33.17, "name": "Test WER"}]}]}]} | Harveenchadha/vakyansh-wav2vec2-hindi-him-4200 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"hi",
"arxiv:2107.07402",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.07402"
] | [
"hi"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #hi #arxiv-2107.07402 #license-mit #model-index #endpoints_compatible #has_space #region-us
|
## Spaces Demo
Check the spaces demo here
## Pretrained Model
Fine-tuned on Multilingual Pretrained Model CLSRIL-23. The original fairseq checkpoint is present here. When using this model, make sure that your speech input is sampled at 16kHz.
Note: The result from this model is without a language model so you may witness a higher WER in some cases.
## Dataset
This model was trained on 4200 hours of Hindi Labelled Data. The labelled data is not present in public domain as of now.
## Training Script
Models were trained using experimental platform setup by Vakyansh team at Ekstep. Here is the training repository.
In case you want to explore training logs on wandb they are here.
## Colab Demo
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the hindi test data of Common Voice.
Test Result: 33.17 %
Colab Evaluation
## Credits
Thanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages. | [
"## Spaces Demo\nCheck the spaces demo here",
"## Pretrained Model\n\nFine-tuned on Multilingual Pretrained Model CLSRIL-23. The original fairseq checkpoint is present here. When using this model, make sure that your speech input is sampled at 16kHz.\n\nNote: The result from this model is without a language model so you may witness a higher WER in some cases.",
"## Dataset\n\nThis model was trained on 4200 hours of Hindi Labelled Data. The labelled data is not present in public domain as of now.",
"## Training Script\n\nModels were trained using experimental platform setup by Vakyansh team at Ekstep. Here is the training repository.\n\nIn case you want to explore training logs on wandb they are here.",
"## Colab Demo",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the hindi test data of Common Voice. \n\n\n\nTest Result: 33.17 %\n\nColab Evaluation",
"## Credits\nThanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages."
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #hi #arxiv-2107.07402 #license-mit #model-index #endpoints_compatible #has_space #region-us \n",
"## Spaces Demo\nCheck the spaces demo here",
"## Pretrained Model\n\nFine-tuned on Multilingual Pretrained Model CLSRIL-23. The original fairseq checkpoint is present here. When using this model, make sure that your speech input is sampled at 16kHz.\n\nNote: The result from this model is without a language model so you may witness a higher WER in some cases.",
"## Dataset\n\nThis model was trained on 4200 hours of Hindi Labelled Data. The labelled data is not present in public domain as of now.",
"## Training Script\n\nModels were trained using experimental platform setup by Vakyansh team at Ekstep. Here is the training repository.\n\nIn case you want to explore training logs on wandb they are here.",
"## Colab Demo",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the hindi test data of Common Voice. \n\n\n\nTest Result: 33.17 %\n\nColab Evaluation",
"## Credits\nThanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages."
] | [
59,
9,
75,
30,
43,
5,
18,
29,
34
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #hi #arxiv-2107.07402 #license-mit #model-index #endpoints_compatible #has_space #region-us \n## Spaces Demo\nCheck the spaces demo here## Pretrained Model\n\nFine-tuned on Multilingual Pretrained Model CLSRIL-23. The original fairseq checkpoint is present here. When using this model, make sure that your speech input is sampled at 16kHz.\n\nNote: The result from this model is without a language model so you may witness a higher WER in some cases.## Dataset\n\nThis model was trained on 4200 hours of Hindi Labelled Data. The labelled data is not present in public domain as of now.## Training Script\n\nModels were trained using experimental platform setup by Vakyansh team at Ekstep. Here is the training repository.\n\nIn case you want to explore training logs on wandb they are here.## Colab Demo## Usage\n\nThe model can be used directly (without a language model) as follows:## Evaluation\nThe model can be evaluated as follows on the hindi test data of Common Voice. \n\n\n\nTest Result: 33.17 %\n\nColab Evaluation## Credits\nThanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages."
] |
automatic-speech-recognition | transformers |
Fine-tuned on Multilingual Pretrained Model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is present [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz.
**Note: The result from this model is without a language model so you may witness a higher WER in some cases.**
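A minimal usage sketch, mirroring the pattern of the other Vakyansh models in this series (the checkpoint id is this repository's, the wav path is a placeholder, and the input is assumed to be 16 kHz mono):

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10")

# Read a 16 kHz mono recording (placeholder path) and run greedy CTC decoding.
audio_input, sample_rate = sf.read("speech_pa.wav")
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

with torch.no_grad():
    logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.decode(predicted_ids[0], skip_special_tokens=True))
```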
| {"language": "pa", "license": "mit", "tags": ["audio", "automatic-speech-recognition", "speech"], "metrics": ["wer"], "model-index": [{"name": "Wav2Vec2 Vakyansh Punjabi Model by Harveen Chadha", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hi", "type": "common_voice", "args": "pa"}, "metrics": [{"type": "wer", "value": 33.17, "name": "Test WER"}]}]}]} | Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pa",
"arxiv:2107.07402",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.07402"
] | [
"pa"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pa #arxiv-2107.07402 #license-mit #model-index #endpoints_compatible #has_space #region-us
|
Fine-tuned on Multilingual Pretrained Model CLSRIL-23. The original fairseq checkpoint is present here. When using this model, make sure that your speech input is sampled at 16kHz.
Note: The result from this model is without a language model so you may witness a higher WER in some cases.
| [] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pa #arxiv-2107.07402 #license-mit #model-index #endpoints_compatible #has_space #region-us \n"
] | [
59
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pa #arxiv-2107.07402 #license-mit #model-index #endpoints_compatible #has_space #region-us \n"
] |
automatic-speech-recognition | transformers |
## Pretrained Model
Fine-tuned on Multilingual Pretrained Model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is present [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz.
**Note: The result from this model is without a language model so you may witness a higher WER in some cases.**
## Dataset
This model was trained on Tamil labelled data. The labelled data is not present in the public domain as of now.
## Training Script
Models were trained using an experimental platform set up by the Vakyansh team at Ekstep. Here is the [training repository](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation).
In case you want to explore the training logs on wandb, they are [here](https://wandb.ai/harveenchadha/tamil-finetuning-multilingual).
## [Colab Demo](https://github.com/harveenchadha/bol/blob/main/demos/hf/tamil/hf_tamil_tnm_4200_demo.ipynb)
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
    # load pretrained model
    processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
    model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
    # load audio
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # INFERENCE
    # retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids, skip_special_tokens=True)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 53.64 %
[**Colab Evaluation**](https://github.com/harveenchadha/bol/blob/main/demos/hf/tamil/hf_vakyansh_tamil_tnm_4200_evaluation_common_voice.ipynb)
## Credits
Thanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages. | {"language": "ta", "license": "mit", "tags": ["audio", "automatic-speech-recognition", "speech"], "metrics": ["wer"], "model-index": [{"name": "Wav2Vec2 Vakyansh Tamil Model by Harveen Chadha", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ta", "type": "common_voice", "args": "ta"}, "metrics": [{"type": "wer", "value": 53.64, "name": "Test WER"}]}]}]} | Harveenchadha/vakyansh-wav2vec2-tamil-tam-250 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"ta",
"arxiv:2107.07402",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.07402"
] | [
"ta"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #ta #arxiv-2107.07402 #license-mit #model-index #endpoints_compatible #has_space #region-us
|
## Pretrained Model
Fine-tuned on Multilingual Pretrained Model CLSRIL-23. The original fairseq checkpoint is present here. When using this model, make sure that your speech input is sampled at 16kHz.
Note: The result from this model is without a language model so you may witness a higher WER in some cases.
## Dataset
This model was trained on 4200 hours of Hindi Labelled Data. The labelled data is not present in public domain as of now.
## Training Script
Models were trained using experimental platform setup by Vakyansh team at Ekstep. Here is the training repository.
In case you want to explore training logs on wandb they are here.
## Colab Demo
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the hindi test data of Common Voice.
Test Result: 53.64 %
Colab Evaluation
## Credits
Thanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages. | [
"## Pretrained Model\n\nFine-tuned on Multilingual Pretrained Model CLSRIL-23. The original fairseq checkpoint is present here. When using this model, make sure that your speech input is sampled at 16kHz.\n\nNote: The result from this model is without a language model so you may witness a higher WER in some cases.",
"## Dataset\n\nThis model was trained on 4200 hours of Hindi Labelled Data. The labelled data is not present in public domain as of now.",
"## Training Script\n\nModels were trained using experimental platform setup by Vakyansh team at Ekstep. Here is the training repository.\n\nIn case you want to explore training logs on wandb they are here.",
"## Colab Demo",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the hindi test data of Common Voice. \n\n\n\nTest Result: 53.64 %\n\nColab Evaluation",
"## Credits\nThanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages."
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #ta #arxiv-2107.07402 #license-mit #model-index #endpoints_compatible #has_space #region-us \n",
"## Pretrained Model\n\nFine-tuned on Multilingual Pretrained Model CLSRIL-23. The original fairseq checkpoint is present here. When using this model, make sure that your speech input is sampled at 16kHz.\n\nNote: The result from this model is without a language model so you may witness a higher WER in some cases.",
"## Dataset\n\nThis model was trained on 4200 hours of Hindi Labelled Data. The labelled data is not present in public domain as of now.",
"## Training Script\n\nModels were trained using experimental platform setup by Vakyansh team at Ekstep. Here is the training repository.\n\nIn case you want to explore training logs on wandb they are here.",
"## Colab Demo",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the hindi test data of Common Voice. \n\n\n\nTest Result: 53.64 %\n\nColab Evaluation",
"## Credits\nThanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages."
] | [
59,
75,
30,
43,
5,
18,
29,
34
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #ta #arxiv-2107.07402 #license-mit #model-index #endpoints_compatible #has_space #region-us \n## Pretrained Model\n\nFine-tuned on Multilingual Pretrained Model CLSRIL-23. The original fairseq checkpoint is present here. When using this model, make sure that your speech input is sampled at 16kHz.\n\nNote: The result from this model is without a language model so you may witness a higher WER in some cases.## Dataset\n\nThis model was trained on 4200 hours of Hindi Labelled Data. The labelled data is not present in public domain as of now.## Training Script\n\nModels were trained using experimental platform setup by Vakyansh team at Ekstep. Here is the training repository.\n\nIn case you want to explore training logs on wandb they are here.## Colab Demo## Usage\n\nThe model can be used directly (without a language model) as follows:## Evaluation\nThe model can be evaluated as follows on the hindi test data of Common Voice. \n\n\n\nTest Result: 53.64 %\n\nColab Evaluation## Credits\nThanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages."
] |
null | transformers |
Hindi Pretrained model on 4200 hours. [Link](https://arxiv.org/abs/2107.07402) | {"language": "hi", "license": "apache-2.0", "tags": ["hf-asr-leaderboard", "hi", "model_for_talk", "pretrained", "robust-speech-event", "speech"]} | Harveenchadha/vakyansh_hindi_base_pretrained | null | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"hf-asr-leaderboard",
"hi",
"model_for_talk",
"pretrained",
"robust-speech-event",
"speech",
"arxiv:2107.07402",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.07402"
] | [
"hi"
] | TAGS
#transformers #pytorch #wav2vec2 #pretraining #hf-asr-leaderboard #hi #model_for_talk #pretrained #robust-speech-event #speech #arxiv-2107.07402 #license-apache-2.0 #endpoints_compatible #region-us
|
Hindi Pretrained model on 4200 hours. Link | [] | [
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #hf-asr-leaderboard #hi #model_for_talk #pretrained #robust-speech-event #speech #arxiv-2107.07402 #license-apache-2.0 #endpoints_compatible #region-us \n"
] | [
76
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #hf-asr-leaderboard #hi #model_for_talk #pretrained #robust-speech-event #speech #arxiv-2107.07402 #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
feature-extraction | transformers | ## Overview
We present CLSRIL-23 (Cross Lingual Speech Representations on Indic Languages), a self-supervised audio pre-trained model which learns cross-lingual speech representations from raw audio across **23 Indic languages**. It is built on top of wav2vec 2.0, which is trained by solving a contrastive task over masked latent speech representations while jointly learning a quantization of the latents shared across all languages.
[Arxiv Link](https://arxiv.org/pdf/2107.07402.pdf)
[Original Repo](https://github.com/Open-Speech-EkStep/vakyansh-models) contains models in fairseq format.
## Languages in the pretraining dataset
| Language | Data (In Hrs) |
|-----------|---------------|
| Assamese | 254.9 |
| Bengali | 331.3 |
| Bodo | 26.9 |
| Dogri | 17.1 |
| English | 819.7 |
| Gujarati | 336.7 |
| Hindi | 4563.7 |
| Kannada | 451.8 |
| Kashmiri | 67.8 |
| Konkani | 36.8 |
| Maithili | 113.8 |
| Malayalam | 297.7 |
| Manipuri | 171.9 |
| Marathi | 458.2 |
| Nepali | 31.6 |
| Odia | 131.4 |
| Punjabi | 486.05 |
| Sanskrit | 58.8 |
| Santali | 6.56 |
| Sindhi | 16 |
| Tamil | 542.6 |
| Telugu | 302.8 |
| Urdu | 259.68 |
## Repo for training:
[Experimentation](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation) platform built on top of fairseq.
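Since the checkpoint is exposed here for feature extraction, a minimal sketch for pulling frame-level representations could look like the following. The checkpoint id is this repository's; it is assumed to ship a feature-extractor config, the wav path is a placeholder, and the input is assumed to be 16 kHz mono.

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "Harveenchadha/wav2vec2-pretrained-clsril-23-10k"
# If the repository does not ship a preprocessor config, Wav2Vec2FeatureExtractor()
# with its default 16 kHz settings can be used instead.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

# Read a 16 kHz mono recording (placeholder path).
speech, sampling_rate = sf.read("speech.wav")
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per ~20 ms frame, usable as cross-lingual speech features.
print(outputs.last_hidden_state.shape)
```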
| {} | Harveenchadha/wav2vec2-pretrained-clsril-23-10k | null | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"arxiv:2107.07402",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.07402"
] | [] | TAGS
#transformers #pytorch #wav2vec2 #feature-extraction #arxiv-2107.07402 #endpoints_compatible #region-us
| Overview
--------
We present a CLSRIL-23 (Cross Lingual Speech Representations on Indic Languages), a self supervised learning based audio pre-trained model which learns cross
lingual speech representations from raw audio across 23 Indic languages. It is built on top of wav2vec
2.0 which is solved by training a contrastive task over masked latent speech representations and
jointly learns the quantization of latents shared across all languages.
Arxiv Link
Original Repo contains models in fairseq format.
Languages in the pretraining dataset
------------------------------------
Repo for training:
------------------
Experimentation platform built on top of fairseq.
| [] | [
"TAGS\n#transformers #pytorch #wav2vec2 #feature-extraction #arxiv-2107.07402 #endpoints_compatible #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #feature-extraction #arxiv-2107.07402 #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
## Model Details
**Model Description:**
The model is used for classifying a text as Abusive (Hatespeech and Offensive) or Normal. The model is trained using data from Gab and Twitter and Human Rationales were included as part of the training data to boost the performance. The model also has a rationale predictor head that can predict the rationales given an abusive sentence
- **Developed by:** Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model.
- **Resources for more information:**
- [Research Paper](https://arxiv.org/abs/2012.10289) Accepted at AAAI 2021.
- [GitHub Repo with datasets and models](https://github.com/punyajoy/HateXplain)
## How to Get Started with the Model
**Details of usage**
Please use the **Model_Rational_Label** class inside [models.py](models.py) to load the models. The default prediction in this hosted inference API may be wrong due to the use of different class initialisations.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
### from models.py
from models import *
tokenizer = AutoTokenizer.from_pretrained("Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two")
model = Model_Rational_Label.from_pretrained("Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two")
inputs = tokenizer("He is a great guy", return_tensors="pt")
prediction_logits, _ = model(input_ids=inputs['input_ids'],attention_mask=inputs['attention_mask'])
```
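A small post-processing sketch for turning `prediction_logits` into a class prediction. This assumes the custom class keeps the standard `config` attribute of `PreTrainedModel` subclasses; per the description above the two classes are Abusive and Normal, but check `config.id2label` for the exact mapping.

```python
import torch

# Softmax over the two classes described above (Abusive vs. Normal).
probs = torch.softmax(prediction_logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
label = model.config.id2label.get(pred_id, str(pred_id))
print(label, float(probs[0, pred_id]))
```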
## Uses
#### Direct Use
This model can be used for Text Classification
#### Downstream Use
[More information needed]
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
The model authors also note in their HateXplain paper that they
> *have not considered any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. Also, in this work we have focused on the English language. It does not consider multilingual hate speech into account.*
#### Training Procedure
##### Preprocessing
The authors detail their preprocessing procedure in the [Github repository](https://github.com/hate-alert/HateXplain/tree/master/Preprocess)
## Evaluation
The model authors detail the hidden layer size and attention for the HateXplain fine-tuned models in the [associated paper](https://arxiv.org/pdf/2012.10289.pdf)
#### Results
The model authors, both in their paper and in the git repository, provide illustrative output of BERT-HateXplain in comparison to BERT and other HateXplain fine-tuned models.
## Citation Information
```bibtex
@article{mathew2020hatexplain,
title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2012.10289},
year={2020}
}
```
| {"language": "en", "license": "apache-2.0", "datasets": ["hatexplain"]} | Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:hatexplain",
"arxiv:2012.10289",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2012.10289"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #en #dataset-hatexplain #arxiv-2012.10289 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## Table of Contents
- Model Details
- How to Get Started With the Model
- Uses
- Risks, Limitations and Biases
- Training
- Evaluation
- Technical Specifications
- Citation Information
## Model Details
Model Description:
The model is used for classifying a text as Abusive (Hatespeech and Offensive) or Normal. The model is trained using data from Gab and Twitter and Human Rationales were included as part of the training data to boost the performance. The model also has a rationale predictor head that can predict the rationales given an abusive sentence
- Developed by: Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee
- Model Type: Text Classification
- Language(s): English
- License: Apache-2.0
- Parent Model: See the BERT base uncased model for more information about the BERT base model.
- Resources for more information:
- Research Paper Accepted at AAAI 2021.
- GitHub Repo with datatsets and models
## How to Get Started with the Model
Details of usage
Please use the Model_Rational_Label class inside URL to load the models. The default prediction in this hosted inference API may be wrong due to the use of different class initialisations.
## Uses
#### Direct Use
This model can be used for Text Classification
#### Downstream Use
[More information needed]
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
(and if you can generate an example of a biased prediction, also something like this):
Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For !example:
The model author's also note in their HateXplain paper that they
> *have not considered any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. Also, in this work we have focused on the English language. It does not consider multilingual hate speech into account.*
#### Training Procedure
##### Preprocessing
The authors detail their preprocessing procedure in the Github repository
## Evaluation
The mode authors detail the Hidden layer size and attention for the HateXplain fien tuned models in the associated paper
#### Results
The model authors both in their paper and in the git repository provide the illustrative output of the BERT - HateXplain in comparison to BERT and and other HateXplain fine tuned !models
| [
"## Table of Contents\n- Model Details\n- How to Get Started With the Model\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Technical Specifications\n- Citation Information",
"## Model Details\nModel Description: \nThe model is used for classifying a text as Abusive (Hatespeech and Offensive) or Normal. The model is trained using data from Gab and Twitter and Human Rationales were included as part of the training data to boost the performance. The model also has a rationale predictor head that can predict the rationales given an abusive sentence\n\n\n- Developed by: Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee \n- Model Type: Text Classification\n- Language(s): English\n- License: Apache-2.0\n- Parent Model: See the BERT base uncased model for more information about the BERT base model.\n- Resources for more information:\n - Research Paper Accepted at AAAI 2021.\n - GitHub Repo with datatsets and models",
"## How to Get Started with the Model\n\nDetails of usage\n\nPlease use the Model_Rational_Label class inside URL to load the models. The default prediction in this hosted inference API may be wrong due to the use of different class initialisations.",
"## Uses",
"#### Direct Use\n\nThis model can be used for Text Classification",
"#### Downstream Use\n\n[More information needed]",
"#### Misuse and Out-of-scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.",
"## Risks, Limitations and Biases\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n(and if you can generate an example of a biased prediction, also something like this): \n\nPredictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For !example: \n\nThe model author's also note in their HateXplain paper that they \n> *have not considered any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. Also, in this work we have focused on the English language. It does not consider multilingual hate speech into account.*",
"#### Training Procedure",
"##### Preprocessing\n\nThe authors detail their preprocessing procedure in the Github repository",
"## Evaluation\nThe mode authors detail the Hidden layer size and attention for the HateXplain fien tuned models in the associated paper",
"#### Results \n\nThe model authors both in their paper and in the git repository provide the illustrative output of the BERT - HateXplain in comparison to BERT and and other HateXplain fine tuned !models"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #en #dataset-hatexplain #arxiv-2012.10289 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## Table of Contents\n- Model Details\n- How to Get Started With the Model\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Technical Specifications\n- Citation Information",
"## Model Details\nModel Description: \nThe model is used for classifying a text as Abusive (Hatespeech and Offensive) or Normal. The model is trained using data from Gab and Twitter and Human Rationales were included as part of the training data to boost the performance. The model also has a rationale predictor head that can predict the rationales given an abusive sentence\n\n\n- Developed by: Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee \n- Model Type: Text Classification\n- Language(s): English\n- License: Apache-2.0\n- Parent Model: See the BERT base uncased model for more information about the BERT base model.\n- Resources for more information:\n - Research Paper Accepted at AAAI 2021.\n - GitHub Repo with datatsets and models",
"## How to Get Started with the Model\n\nDetails of usage\n\nPlease use the Model_Rational_Label class inside URL to load the models. The default prediction in this hosted inference API may be wrong due to the use of different class initialisations.",
"## Uses",
"#### Direct Use\n\nThis model can be used for Text Classification",
"#### Downstream Use\n\n[More information needed]",
"#### Misuse and Out-of-scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.",
"## Risks, Limitations and Biases\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n(and if you can generate an example of a biased prediction, also something like this): \n\nPredictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For !example: \n\nThe model author's also note in their HateXplain paper that they \n> *have not considered any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. Also, in this work we have focused on the English language. It does not consider multilingual hate speech into account.*",
"#### Training Procedure",
"##### Preprocessing\n\nThe authors detail their preprocessing procedure in the Github repository",
"## Evaluation\nThe mode authors detail the Hidden layer size and attention for the HateXplain fien tuned models in the associated paper",
"#### Results \n\nThe model authors both in their paper and in the git repository provide the illustrative output of the BERT - HateXplain in comparison to BERT and and other HateXplain fine tuned !models"
] | [
60,
35,
180,
51,
3,
14,
11,
71,
194,
6,
24,
27,
47
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #en #dataset-hatexplain #arxiv-2012.10289 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n## Table of Contents\n- Model Details\n- How to Get Started With the Model\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Technical Specifications\n- Citation Information## Model Details\nModel Description: \nThe model is used for classifying a text as Abusive (Hatespeech and Offensive) or Normal. The model is trained using data from Gab and Twitter and Human Rationales were included as part of the training data to boost the performance. The model also has a rationale predictor head that can predict the rationales given an abusive sentence\n\n\n- Developed by: Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee \n- Model Type: Text Classification\n- Language(s): English\n- License: Apache-2.0\n- Parent Model: See the BERT base uncased model for more information about the BERT base model.\n- Resources for more information:\n - Research Paper Accepted at AAAI 2021.\n - GitHub Repo with datatsets and models## How to Get Started with the Model\n\nDetails of usage\n\nPlease use the Model_Rational_Label class inside URL to load the models. The default prediction in this hosted inference API may be wrong due to the use of different class initialisations.## Uses#### Direct Use\n\nThis model can be used for Text Classification#### Downstream Use\n\n[More information needed]#### Misuse and Out-of-scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.## Risks, Limitations and Biases\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n(and if you can generate an example of a biased prediction, also something like this): \n\nPredictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For !example: \n\nThe model author's also note in their HateXplain paper that they \n> *have not considered any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. Also, in this work we have focused on the English language. It does not consider multilingual hate speech into account.*#### Training Procedure##### Preprocessing\n\nThe authors detail their preprocessing procedure in the Github repository## Evaluation\nThe mode authors detail the Hidden layer size and attention for the HateXplain fien tuned models in the associated paper#### Results \n\nThe model authors both in their paper and in the git repository provide the illustrative output of the BERT - HateXplain in comparison to BERT and and other HateXplain fine tuned !models"
] |
text-classification | transformers | The model is used for classifying a text as **Hatespeech**, **Offensive**, or **Normal**. The model is trained using data from Gab and Twitter and *Human Rationales* were included as part of the training data to boost the performance.
The dataset and models are available here: https://github.com/punyajoy/HateXplain
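A minimal usage sketch (not part of the original card): load the checkpoint with the generic `transformers` sequence-classification head and print a probability per class. The label names are read from the checkpoint's config rather than hard-coded here, and the authors elsewhere recommend their own `Model_Rational_Label` class for loading, so treat this generic head as an approximation.
~~~
# Hedged sketch: generic sequence-classification inference for the 3-way
# Hatespeech / Offensive / Normal head; label names come from model.config.id2label.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Hate-speech-CNERG/bert-base-uncased-hatexplain"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Everyone deserves to be treated with respect.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze()

for idx, prob in enumerate(probs.tolist()):
    print(f"{model.config.id2label[idx]}: {prob:.3f}")
~~~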
**For more details about our paper**
Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. "[HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection](https://arxiv.org/abs/2012.10289)". Accepted at AAAI 2021.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{mathew2020hatexplain,
title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2012.10289},
year={2020}
}
~~~
| {"language": "en", "license": "apache-2.0", "datasets": ["hatexplain"]} | Hate-speech-CNERG/bert-base-uncased-hatexplain | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:hatexplain",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #jax #bert #text-classification #en #dataset-hatexplain #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| The model is used for classifying a text as Hatespeech, Offensive, or Normal. The model is trained using data from Gab and Twitter and *Human Rationales* were included as part of the training data to boost the performance.
The dataset and models are available here: URL
For more details about our paper
Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. "HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection". Accepted at AAAI 2021.
*Please cite our paper in any published work that uses any of these resources.*
~~~
@article{mathew2020hatexplain,
title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2012.10289},
year={2020}
}
~~~
| [] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #en #dataset-hatexplain #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] | [
52
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #en #dataset-hatexplain #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-classification | transformers |
This model is used for detecting **hatespeech** in the **Arabic language**. The mono in the name refers to the monolingual setting, where the model is trained using only Arabic language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates; the best validation score, 0.877609, was achieved with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).
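As an illustration (not taken from the training repository), the checkpoint can be queried through the generic `text-classification` pipeline; the label strings returned depend on the checkpoint's config, and swapping the model id should give the other `dehatebert-mono-*` variants in the same way.
~~~
# Illustrative only: classify one Arabic sentence with the monolingual checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/dehatebert-mono-arabic",
)
print(classifier("هذا مجرد مثال على جملة عادية."))
~~~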
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "ar", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-arabic | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"ar",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2004.06465"
] | [
"ar"
] | TAGS
#transformers #pytorch #jax #bert #text-classification #ar #arxiv-2004.06465 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
This model is used for detecting hatespeech in the Arabic language. The mono in the name refers to the monolingual setting, where the model is trained using only Arabic language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates; the best validation score, 0.877609, was achieved with a learning rate of 2e-5. Training code can be found at this url
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "Deep Learning Models for Multilingual Hate Speech Detection". Accepted at ECML-PKDD 2020.
*Please cite our paper in any published work that uses any of these resources.*
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| [
"### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n\n~~~"
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #ar #arxiv-2004.06465 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n\n~~~"
] | [
50,
154
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #ar #arxiv-2004.06465 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n\n~~~"
] |
text-classification | transformers | This model is used for detecting **hatespeech** in the **English language**. The mono in the name refers to the monolingual setting, where the model is trained using only English language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates; the best validation score, 0.726030, was achieved with a learning rate of 2e-5. Training code can be found here: https://github.com/punyajoy/DE-LIMIT
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "en", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-english | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2004.06465"
] | [
"en"
] | TAGS
#transformers #pytorch #jax #bert #text-classification #en #arxiv-2004.06465 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| This model is used for detecting hatespeech in the English language. The mono in the name refers to the monolingual setting, where the model is trained using only English language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates; the best validation score, 0.726030, was achieved with a learning rate of 2e-5. Training code can be found here URL
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "Deep Learning Models for Multilingual Hate Speech Detection". Accepted at ECML-PKDD 2020.
*Please cite our paper in any published work that uses any of these resources.*
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| [
"### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n~~~"
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #en #arxiv-2004.06465 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n~~~"
] | [
54,
154
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #en #arxiv-2004.06465 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n~~~"
] |
text-classification | transformers |
This model is used for detecting **hatespeech** in the **French language**. The mono in the name refers to the monolingual setting, where the model is trained using only French language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates; the best validation score, 0.692094, was achieved with a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "fr", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-french | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"fr",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2004.06465"
] | [
"fr"
] | TAGS
#transformers #pytorch #jax #bert #text-classification #fr #arxiv-2004.06465 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
This model is used for detecting hatespeech in the French language. The mono in the name refers to the monolingual setting, where the model is trained using only French language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates; the best validation score, 0.692094, was achieved with a learning rate of 3e-5. Training code can be found at this url
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "Deep Learning Models for Multilingual Hate Speech Detection". Accepted at ECML-PKDD 2020.
*Please cite our paper in any published work that uses any of these resources.*
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| [
"### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n\n~~~"
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #fr #arxiv-2004.06465 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n\n~~~"
] | [
50,
154
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #fr #arxiv-2004.06465 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n\n~~~"
] |
text-classification | transformers |
This model is used for detecting **hatespeech** in the **German language**. The mono in the name refers to the monolingual setting, where the model is trained using only German language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates; the best validation score, 0.649794, was achieved with a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "de", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-german | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"de",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2004.06465"
] | [
"de"
] | TAGS
#transformers #pytorch #jax #bert #text-classification #de #arxiv-2004.06465 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
This model is used for detecting hatespeech in the German language. The mono in the name refers to the monolingual setting, where the model is trained using only German language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates; the best validation score, 0.649794, was achieved with a learning rate of 3e-5. Training code can be found at this url
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "Deep Learning Models for Multilingual Hate Speech Detection". Accepted at ECML-PKDD 2020.
*Please cite our paper in any published work that uses any of these resources.*
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| [
"### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n\n~~~"
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #de #arxiv-2004.06465 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n\n~~~"
] | [
50,
154
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #de #arxiv-2004.06465 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n\n~~~"
] |
text-classification | transformers | This model is used for detecting **hatespeech** in the **Indonesian language**. The mono in the name refers to the monolingual setting, where the model is trained using only Indonesian language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates; the best validation score, 0.844494, was achieved with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {} | Hate-speech-CNERG/dehatebert-mono-indonesian | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2004.06465",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2004.06465"
] | [] | TAGS
#transformers #pytorch #jax #bert #text-classification #arxiv-2004.06465 #autotrain_compatible #endpoints_compatible #region-us
| This model is used for detecting hatespeech in the Indonesian language. The mono in the name refers to the monolingual setting, where the model is trained using only Indonesian language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates; the best validation score, 0.844494, was achieved with a learning rate of 2e-5. Training code can be found at this url
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "Deep Learning Models for Multilingual Hate Speech Detection". Accepted at ECML-PKDD 2020.
*Please cite our paper in any published work that uses any of these resources.*
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| [
"### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n\n~~~"
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #arxiv-2004.06465 #autotrain_compatible #endpoints_compatible #region-us \n",
"### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n\n~~~"
] | [
40,
154
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #arxiv-2004.06465 #autotrain_compatible #endpoints_compatible #region-us \n### For more details about our paper\n\nSai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. \"Deep Learning Models for Multilingual Hate Speech Detection\". Accepted at ECML-PKDD 2020.\n\n*Please cite our paper in any published work that uses any of these resources.*\n\n~~~\n@article{aluru2020deep,\n title={Deep Learning Models for Multilingual Hate Speech Detection},\n author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},\n journal={arXiv preprint arXiv:2004.06465},\n year={2020}\n}\n\n~~~"
] |