Column schema (name, type, and observed range of values or string lengths):

| Column | Type | Values |
|---|---|---|
| repo_id | string | lengths 4-122 |
| author | string | lengths 2-38 |
| model_type | string | lengths 2-33 |
| files_per_repo | int64 | 2-39k |
| downloads_30d | int64 | 0-33.7M |
| library | string | lengths 2-37 |
| likes | int64 | 0-4.87k |
| pipeline | string | lengths 5-30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2-33 |
| languages | string | lengths 2-1.63k |
| datasets | string | lengths 2-2.58k |
| co2 | string | lengths 6-258 |
| prs_count | int64 | 0-125 |
| prs_open | int64 | 0-120 |
| prs_merged | int64 | 0-46 |
| prs_closed | int64 | 0-34 |
| discussions_count | int64 | 0-218 |
| discussions_open | int64 | 0-148 |
| discussions_closed | int64 | 0-70 |
| tags | string | lengths 2-513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 2 classes |
| has_text | bool | 1 class |
| text_length | int64 | 201-598k |
| readme | string | lengths 0-598k |
MYX4567/distilbert-base-uncased-finetuned-squad
MYX4567
distilbert
12
7
transformers
1
question-answering
true
false
false
apache-2.0
null
['squad']
null
1
1
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,283
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1520 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2177 | 1.0 | 5533 | 1.1565 | | 0.9472 | 2.0 | 11066 | 1.1174 | | 0.7634 | 3.0 | 16599 | 1.1520 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
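The card above gives only training details and no usage snippet. A minimal extractive question-answering sketch with the 🤗 Transformers `pipeline` API is shown below; the question and context are illustrative and not taken from the card.

```python
from transformers import pipeline

# Load the SQuAD-fine-tuned DistilBERT checkpoint listed in this record.
qa = pipeline("question-answering", model="MYX4567/distilbert-base-uncased-finetuned-squad")

context = "The Eiffel Tower was completed in 1889 and stands on the Champ de Mars in Paris."
result = qa(question="When was the Eiffel Tower completed?", context=context)
print(result)  # dict with 'score', 'start', 'end', 'answer'
```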
MYX4567/distilgpt2-finetuned-wikitext2
MYX4567
gpt2
9
32
transformers
0
text-generation
true
false
false
apache-2.0
null
[]
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,242
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6428 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.76 | 1.0 | 2334 | 3.6658 | | 3.6325 | 2.0 | 4668 | 3.6454 | | 3.6068 | 3.0 | 7002 | 3.6428 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
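As with the previous card, no inference example is provided. A minimal text-generation sketch follows; the prompt and sampling settings are illustrative assumptions.

```python
from transformers import pipeline

# Causal language generation with the WikiText-2-fine-tuned distilgpt2 checkpoint.
generator = pipeline("text-generation", model="MYX4567/distilgpt2-finetuned-wikitext2")

output = generator("The history of natural language processing", max_new_tokens=40, do_sample=True)
print(output[0]["generated_text"])
```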
MYX4567/gpt2-wikitext2
MYX4567
gpt2
9
6
transformers
0
text-generation
true
false
false
null
null
[]
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,206
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.3227 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.7523 | 1.0 | 2249 | 6.6652 | | 6.4134 | 2.0 | 4498 | 6.3987 | | 6.2507 | 3.0 | 6747 | 6.3227 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
Maaly/bgc-accession
Maaly
bert
46
7
transformers
0
token-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
736
bgc-accession model is a Named Entity Recognition (NER) model that identifies and annotates the accession number of biosynthetic gene clusters in texts. The model is a fine-tuned BioBERT model and the training dataset is available in https://gitlab.com/maaly7/emerald_bgcs_annotations Testing examples: 1. The genome sequences of Leptolyngbya sp. PCC 7375 (ALVN00000000) and G. sunshinyii YC6258 (NZ_CP007142.1) were obtained previously.36,59 2. K311 was sequenced (NCBI accession number: JN852959) and analyzed with FramePlot and 18 genes were predicted to be involved in echinomycin biosynthesis (Figure 2). 3. The mar cluster was sequenced and annotated and the complete sequence was deposited into Genbank (accession KF711829).
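The card lists test sentences but no loading code. A minimal token-classification sketch is given below, using one of the card's own test sentences; the aggregation setting is an assumption, not stated in the card.

```python
from transformers import pipeline

# NER over the fine-tuned BioBERT checkpoint; aggregation_strategy="simple" groups word pieces into entities.
ner = pipeline(
    "token-classification",
    model="Maaly/bgc-accession",
    aggregation_strategy="simple",
)

text = ("The mar cluster was sequenced and annotated and the complete sequence "
        "was deposited into Genbank (accession KF711829).")
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```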
Maaly/body-site
Maaly
bert
11
13
transformers
0
token-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,150
body-site model is a Named Entity Recognition (NER) model that identifies and annotates the body-site of microbiome samples in texts. The model is a fine-tuned BioBERT model and the training dataset is available in https://gitlab.com/maaly7/emerald_metagenomics_annotations Testing examples: 1. Scalp hair was collected from behind the right ear, near the right retroauricular crease, and pubic hair was collected from their right pubis, near the right inguinal crease. 2. Field-collected bee samples were dissected on dry ice and separated into head, thorax (excluding legs and wings), and abdomens. 3. TSO modulate the IEC and LPMC transcriptome To gain further insights into the mechanisms of TSO treatment, we performed genome wide expression analysis on intestinal epithelial cells (IEC) and lamina propria mononuclear cells (LPMC) isolated from caecum samples by RNA sequencing (RNAseq). 4. Two catheters were bilaterally placed in the CA1 region of the hippocampus with the coordinates of 4.5 mm anterior to bregma, 1.6 mm ventral to the dura, and two directions of ± 4.0 mm from the interaural line (Park et al. 2013; Yang et al. 2013).
Maaly/host
Maaly
bert
47
29
transformers
0
token-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
736
host model is a Named Entity Recognition (NER) model that identifies and annotates the host (living organism) of microbiome samples in texts. The model is a fine-tuned BioBERT model and the training dataset is available in https://gitlab.com/maaly7/emerald_metagenomics_annotations Testing examples: 1. Turkestan cockroach nymphs (Finke, 2013) were fed to the treefrogs at a quantity of 10% of treefrog biomass twice a week. 2. Samples were collected from clinically healthy giant pandas (five females and four males) at the China Conservation and Research Center for Giant Pandas (Ya'an, China). 3. Field-collected bee samples were dissected on dry ice and separated into head, thorax (excluding legs and wings), and abdomens.
MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387
MadhurJindalWorkMail
bert
9
3
transformers
0
text-classification
true
false
false
null
['en']
['MadhurJindalWorkMail/autonlp-data-Gibb-Detect']
70.95647633212745
0
0
0
0
0
0
0
autonlp
false
true
true
1,263
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 515314387 - CO2 Emissions (in grams): 70.95647633212745 ## Validation Metrics - Loss: 0.08077705651521683 - Accuracy: 0.9760103738923709 - Macro F1: 0.9728412857204902 - Micro F1: 0.9760103738923709 - Weighted F1: 0.9759907151741426 - Macro Precision: 0.9736622407675567 - Micro Precision: 0.9760103738923709 - Weighted Precision: 0.97673611876005 - Macro Recall: 0.9728978421381711 - Micro Recall: 0.9760103738923709 - Weighted Recall: 0.9760103738923709 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
MaggieXM/deberta-base-finetuned-squad
MaggieXM
deberta
17
7
transformers
0
question-answering
true
false
false
mit
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,096
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-finetuned-squad This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.0001 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.0 | 2 | 5.3843 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
MaggieXM/distilbert-base-uncased-finetuned-squad
MaggieXM
distilbert
20
5
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,109
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.01 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.01 | 56 | 4.8054 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
Mahalakshmi/wav2vec2-xls-r-300m-demo-colab
Mahalakshmi
wav2vec2
11
8
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,233
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - eval_loss: 0.9475 - eval_wer: 1.0377 - eval_runtime: 70.5646 - eval_samples_per_second: 25.239 - eval_steps_per_second: 3.16 - epoch: 21.05 - step: 2000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 300 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
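The card reports only intermediate evaluation figures (note the eval_wer above 1.0, which suggests an unconverged checkpoint). For completeness, a minimal automatic-speech-recognition sketch is shown; the audio file path is a placeholder.

```python
from transformers import pipeline

# CTC-based speech recognition with the fine-tuned XLS-R checkpoint; wav2vec2 models expect 16 kHz audio.
asr = pipeline("automatic-speech-recognition", model="Mahalakshmi/wav2vec2-xls-r-300m-demo-colab")

transcription = asr("sample.wav")  # placeholder path to a local recording
print(transcription["text"])
```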
Maltehb/aelaectra-danish-electra-small-cased-ner-dane
Maltehb
electra
8
2,908
transformers
1
token-classification
true
true
false
mit
['da']
['DAGW']
null
0
0
0
0
0
0
0
['ælæctra', 'pytorch', 'danish', 'ELECTRA-Small', 'replaced token detection']
false
true
true
6,441
# Ælæctra - Finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) by Malte Højmark-Bertelsen. **Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up your findings! Ælæctra was, as mentioned, created to enhance the Danish NLP capabilties and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*" as the title of this repository becomes "*-l-ctra*". How ironic.🙂 Here is an example on how to load the finetuned Ælæctra-cased model for Named Entity Recognition in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased-ner-dane") model = AutoModelForTokenClassification.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased-ner-dane") ``` ### Evaluation of current Danish Language Models Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated: | Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download | | --- | --- | --- | --- | --- | --- | --- | | Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) | | Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) | | DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) | | mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) | | mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) | On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. ### Pretraining To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/) The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). 
The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model ### Fine-tuning To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/) ### References Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555 Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019) Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805 Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565 Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521 #### Acknowledgements As the majority of this repository is build upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order. A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020). Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback. Lastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high! #### Contact For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20ÆlæctraCasedNER) or any of the following platforms: [<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter] [<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin] [<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram] <br /> </details> [twitter]: https://twitter.com/malteH_B [instagram]: https://www.instagram.com/maltemusen/ [linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/
Maltehb/aelaectra-danish-electra-small-cased
Maltehb
electra
8
462
transformers
1
null
true
true
false
mit
['da']
['DAGW']
4009.5
0
0
0
0
1
0
1
['ælæctra', 'pytorch', 'danish', 'ELECTRA-Small', 'replaced token detection']
false
true
true
6,801
# Ælæctra - A Step Towards More Efficient Danish Natural Language Processing **Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Initially a cased and an uncased model are released. It was created as part of a Cognitive Science bachelor's thesis. Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up your findings! Ælæctra was, as mentioned, created to enhance the Danish NLP capabilties and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*" as the title of this repository becomes "*-l-ctra*". How ironic.🙂 Here is an example on how to load both the cased and the uncased Ælæctra model in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForPreTraining tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased") model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased") ``` ```python from transformers import AutoTokenizer, AutoModelForPreTraining tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased") model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased") ``` ### Evaluation of current Danish Language Models Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated: | Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download | | --- | --- | --- | --- | --- | --- | --- | | Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) | | Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) | | DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) | | mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) | | mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) | On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'. ### Pretraining To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/-l-ctra/blob/master/infrastructure/Dockerfile). 
Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/-l-ctra/blob/master/notebooks/pretraining/) The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model ### Fine-tuning To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/-l-ctra/blob/master/notebooks/fine-tuning/) ### References Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555 Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019) Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805 Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565 Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521 #### Acknowledgements As the majority of this repository is build upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order. A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020). Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback. Lastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high! #### Contact For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20Ælæctra) or any of the following platforms: [<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter] [<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin] [<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram] <br /> </details> [twitter]: https://twitter.com/malteH_B [instagram]: https://www.instagram.com/maltemusen/ [linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/
Maltehb/aelaectra-danish-electra-small-uncased-ner-dane
Maltehb
electra
8
12
transformers
0
token-classification
true
true
false
mit
['da']
['DAGW']
null
0
0
0
0
0
0
0
['ælæctra', 'pytorch', 'danish', 'ELECTRA-Small', 'replaced token detection']
false
true
true
6,448
# Ælæctra - Finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) by Malte Højmark-Bertelsen. **Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up your findings! Ælæctra was, as mentioned, created to enhance the Danish NLP capabilties and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*" as the title of this repository becomes "*-l-ctra*". How ironic.🙂 Here is an example on how to load the finetuned Ælæctra-uncased model for Named Entity Recognition in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased-ner-dane") model = AutoModelForTokenClassification.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased-ner-dane") ``` ### Evaluation of current Danish Language Models Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated: | Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download | | --- | --- | --- | --- | --- | --- | --- | | Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) | | Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) | | DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) | | mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) | | mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) | On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. ### Pretraining To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/) The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). 
The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model ### Fine-tuning To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/) ### References Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555 Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019) Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805 Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565 Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521 #### Acknowledgements As the majority of this repository is build upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order. A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020). Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback. Lastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high! #### Contact For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20ÆlæctraUncasedNER) or any of the following platforms: [<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter] [<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin] [<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram] <br /> </details> [twitter]: https://twitter.com/malteH_B [instagram]: https://www.instagram.com/maltemusen/ [linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/
Maltehb/aelaectra-danish-electra-small-uncased
Maltehb
electra
7
31
transformers
0
null
true
false
false
mit
['da']
['DAGW']
4009.5
0
0
0
0
0
0
0
['ælæctra', 'pytorch', 'danish', 'ELECTRA-Small', 'replaced token detection']
false
true
true
6,718
# Ælæctra - A Step Towards More Efficient Danish Natural Language Processing **Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Initially a cased and an uncased model are released. It was created as part of a Cognitive Science bachelor's thesis. Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up your findings! Ælæctra was, as mentioned, created to enhance the Danish NLP capabilties and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*" as the title of this repository becomes "*-l-ctra*". How ironic.🙂 Here is an example on how to load both the cased and the uncased Ælæctra model in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForPreTraining tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-cased") model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-cased") ``` ```python from transformers import AutoTokenizer, AutoModelForPreTraining tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-uncased") model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-uncased") ``` ### Evaluation of current Danish Language Models Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated: | Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download | | --- | --- | --- | --- | --- | --- | --- | | Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) | | Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) | | DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) | | mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) | | mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) | On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'. ### Pretraining To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). 
Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/) The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model ### Fine-tuning To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/) ### References Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555 Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019) Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805 Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565 Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521 #### Acknowledgements As the majority of this repository is build upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order. A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020). Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback. Lastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high! #### Contact For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20Ælæctra) or any of the following platforms: [<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter] [<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin] [<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram] <br /> </details> [twitter]: https://twitter.com/malteH_B [instagram]: https://www.instagram.com/maltemusen/ [linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/
Maltehb/danish-bert-botxo-ner-dane
Maltehb
bert
9
27
transformers
1
token-classification
true
true
true
cc-by-4.0
['da']
['common_crawl', 'wikipedia', 'dindebat.dk', 'hestenettet.dk', 'danish_OpenSubtitles']
null
0
0
0
0
0
0
0
['danish', 'bert', 'masked-lm', 'botxo']
false
true
true
2,525
# Danish BERT (version 2, uncased) by [Certainly](https://certainly.io/) (previously known as BotXO) finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) by Malte Højmark-Bertelsen. Humongous amounts of credit needs to go to [Certainly](https://certainly.io/) (previously known as BotXO), for pretraining the Danish BERT. For data and training details see their [GitHub repository](https://github.com/certainlyio/nordic_bert) or [this article](https://www.certainly.io/blog/danish-bert-model/). You can also visit their [organization page](https://huggingface.co/Certainly) on Hugging Face. It is both available in TensorFlow and Pytorch format. The original TensorFlow version can be downloaded using [this link](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1). Here is an example on how to load Danish BERT for token classification in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("Maltehb/danish-bert-botxo-ner-dane") model = AutoModelForTokenClassification.from_pretrained("Maltehb/danish-bert-botxo-ner-dane") ``` ### References Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019) Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565 #### Contact For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20DanishBERTUncasedNER) or any of the following platforms: [<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter] [<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin] [<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram] <br /> </details> [twitter]: https://twitter.com/malteH_B [instagram]: https://www.instagram.com/maltemusen/ [linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/
Maltehb/danish-bert-botxo
Maltehb
bert
13
5,694
transformers
3
fill-mask
true
true
true
cc-by-4.0
['da']
['common_crawl', 'wikipedia', 'dindebat.dk', 'hestenettet.dk', 'danishOpenSubtitles']
null
0
0
0
0
0
0
0
['danish', 'bert', 'masked-lm', 'Certainly']
false
true
true
1,055
# Danish BERT (version 2, uncased) by [Certainly](https://certainly.io/) (previously known as BotXO). All credit goes to [Certainly](https://certainly.io/) (previously known as BotXO), who developed Danish BERT. For data and training details see their [GitHub repository](https://github.com/certainlyio/nordic_bert) or [this article](https://www.certainly.io/blog/danish-bert-model/). You can also visit their [organization page](https://huggingface.co/Certainly) on Hugging Face. It is both available in TensorFlow and Pytorch format. The original TensorFlow version can be downloaded using [this link](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1). Here is an example on how to load Danish BERT in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForPreTraining tokenizer = AutoTokenizer.from_pretrained("Maltehb/danish-bert-botxo") model = AutoModelForPreTraining.from_pretrained("Maltehb/danish-bert-botxo") ```
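The card loads the checkpoint with `AutoModelForPreTraining`, while the record tags the repository as fill-mask. A masked-language-model sketch may therefore be the more direct usage example; the Danish sentence is illustrative.

```python
from transformers import pipeline

# Masked-token prediction with the uncased Danish BERT.
fill = pipeline("fill-mask", model="Maltehb/danish-bert-botxo")

for prediction in fill("København er [MASK] i Danmark."):
    print(prediction["token_str"], round(prediction["score"], 3))
```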
Maniac/wav2vec2-xls-r-60-urdu
Maniac
wav2vec2
19
9
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ur']
['common_voice']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
true
true
true
1,536
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: 3.8433 - Wer: 0.9852 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 1.468 | 166.67 | 500 | 3.0262 | 1.0035 | | 0.0572 | 333.33 | 1000 | 3.5352 | 0.9721 | | 0.0209 | 500.0 | 1500 | 3.7266 | 0.9834 | | 0.0092 | 666.67 | 2000 | 3.8433 | 0.9852 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
Maniac/wav2vec2-xls-r-urdu
Maniac
wav2vec2
19
10
transformers
1
automatic-speech-recognition
true
false
false
apache-2.0
['ur']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'sv', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
true
true
true
1,346
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: 1.5614 - Wer: 0.6765 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.9115 | 20.83 | 500 | 1.5400 | 0.7280 | | 0.1155 | 41.67 | 1000 | 1.5614 | 0.6765 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
Manyman3231/lowlight-enhancement
Manyman3231
null
8
0
null
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
577
    return im

def main():
    st.title("Lowlight Enhancement")
    st.write("This is a simple lowlight enhancement app with great performance and does not require paired images to train.")
    st.write("The model runs at 1000/11 FPS on single GPU/CPU on images with a size of 1200*900*3")
    uploaded_file = st.file_uploader("Lowlight Image")
    if uploaded_file:
        data_lowlight = Image.open(uploaded_file)
        col1, col2 = st.columns(2)
        col1.write("Original (Lowlight)")
        col1.image(data_lowlight, caption="Lowlight Image", use_column_width=True)
Mapcar/pegasus-samsum
Mapcar
pegasus
12
5
transformers
0
text2text-generation
true
false
false
null
null
['samsum']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,258
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4844 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6936 | 0.54 | 500 | 1.4844 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
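The card gives training details only. Since the checkpoint is PEGASUS fine-tuned on the SAMSum dialogue-summarization data, a minimal summarization sketch follows; the dialogue is an invented example in the style of SAMSum.

```python
from transformers import pipeline

# Dialogue summarization with the PEGASUS checkpoint fine-tuned on SAMSum.
summarizer = pipeline("summarization", model="Mapcar/pegasus-samsum")

dialogue = (
    "Anna: Are we still meeting at 6?\n"
    "Ben: Yes, but can we push it to 6:30?\n"
    "Anna: Sure, see you at the cafe then."
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```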
Marc/pegasus_xsum_gigaword
Marc
pegasus
11
5
transformers
0
text2text-generation
true
false
false
null
['English']
['XSUM', 'Gigaword']
null
1
1
0
0
0
0
0
[]
false
true
true
1,842
# Pegasus XSUM Gigaword ## Model description Pegasus XSUM model finetuned to Gigaword Summarization task, significantly better performance than pegasus gigaword, but still doesn't match model paper performance. ## Intended uses & limitations Produces short summaries with the coherence of the XSUM Model #### How to use ```python # You can include sample code which will be formatted ``` #### Limitations and bias Still has all the biases of any of the abstractive models, but seems a little less prone to hallucination. ## Training data Initialized with pegasus-XSUM ## Training procedure Trained for 11500 iterations on Gigaword corpus using OOB seq2seq (from hugging face using the default parameters) ## Eval results Evaluated on Gigaword test set (from hugging face using the default parameters) run_summarization.py --model_name_or_path pegasus-xsum/checkpoint-11500/ --do_predict --dataset_name gigaword --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir pegasus-xsum --per_device_train_batch_size=8 --per_device_eval_batch_size=8 --overwrite_output_dir --predict_with_generate | Metric | Score | | ----------- | ----------- | | eval_rouge1 | 34.1958 | | eval_rouge2 | 15.4033 | | eval_rougeL | 31.4488 | run_summarization.py --model_name_or_path google/pegasus-gigaword --do_predict --dataset_name gigaword --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir pegasus-xsum --per_device_train_batch_size=8 --per_device_eval_batch_size=8 --overwrite_output_dir --predict_with_generate | Metric | Score | | ----------- | ----------- | | eval_rouge1 | 20.8111 | | eval_rouge2 | 8.766 | | eval_rougeL | 18.4431 | ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020} } ```
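The card's "How to use" block is left as a placeholder. A minimal sketch of how such a checkpoint could be called for Gigaword-style headline summarization is shown below; the input sentence is illustrative, and the "summarize: " prefix mirrors the `--source_prefix` flag used in the card's evaluation commands.

```python
from transformers import pipeline

# Headline-style summarization with the PEGASUS-XSUM model fine-tuned on Gigaword.
summarizer = pipeline("summarization", model="Marc/pegasus_xsum_gigaword")

# The card's evaluation commands pass --source_prefix "summarize: ", so the same prefix is applied here.
text = "summarize: us stocks rose on monday as technology shares rallied after strong earnings reports ."
print(summarizer(text, max_length=32)[0]["summary_text"])
```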
MarcBrun/ixambert-finetuned-squad-eu-en
MarcBrun
bert
8
25
transformers
1
question-answering
true
false
false
null
['en', 'es', 'eu']
['squad']
null
1
1
0
0
0
0
0
[]
false
true
true
1,761
# ixambert-base-cased finetuned for QA This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on SQuAD v1.1 and an experimental version of SQuAD1.1 in Basque (1/3 size of original SQuAD1.1), that is able to answer basic factual questions in English, Spanish and Basque. ## Overview * **Language model:** ixambert-base-cased * **Languages:** English, Spanish and Basque * **Downstream task:** Extractive QA * **Training data:** SQuAD v1.1 + experimental SQuAD1.1 in Basque * **Eval data:** SQuAD v1.1 + experimental SQuAD1.1 in Basque * **Infrastructure:** 1x GeForce RTX 2080 ## Outputs The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example: ```python {'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'} ``` ## How to use ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "MarcBrun/ixambert-finetuned-squad-eu-en" # To get predictions context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820" question = "When was Florence Nightingale born?" qa = pipeline("question-answering", model=model_name, tokenizer=model_name) pred = qa(question=question,context=context) # To load the model and tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Hyperparameters ``` batch_size = 8 n_epochs = 3 learning_rate = 2e-5 optimizer = AdamW lr_schedule = linear max_seq_len = 384 doc_stride = 128 ```
MarcBrun/ixambert-finetuned-squad-eu
MarcBrun
bert
8
13
transformers
0
question-answering
true
false
false
null
['en', 'es', 'eu']
null
null
1
1
0
0
0
0
0
[]
false
true
true
1,686
# ixambert-base-cased finetuned for QA This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on an experimental version of SQuAD1.1 in Basque (1/3 size of original SQuAD1.1), that is able to answer basic factual questions. ## Overview * **Language model:** ixambert-base-cased * **Languages:** English, Spanish and Basque * **Downstream task:** Extractive QA * **Training data:** Experimental SQuAD1.1 in Basque * **Eval data:** Experimental SQuAD1.1 in Basque * **Infrastructure:** 1x GeForce RTX 2080 ## Outputs The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example: ```python {'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'} ``` ## How to use ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "MarcBrun/ixambert-finetuned-squad-eu" # To get predictions context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820" question = "When was Florence Nightingale born?" qa = pipeline("question-answering", model=model_name, tokenizer=model_name) pred = qa(question=question,context=context) # To load the model and tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Hyperparameters ``` batch_size = 8 n_epochs = 3 learning_rate = 2e-5 optimizer = AdamW lr_schedule = linear max_seq_len = 384 doc_stride = 128 ```
MarcBrun/ixambert-finetuned-squad
MarcBrun
bert
8
13
transformers
1
question-answering
true
false
false
null
['en', 'es', 'eu']
['squad']
null
1
1
0
0
0
0
0
[]
false
true
true
1,605
# ixambert-base-cased finetuned for QA This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on SQuAD v1.1, that is able to answer basic factual questions in English, Spanish and Basque. ## Overview * **Language model:** ixambert-base-cased * **Languages:** English, Spanish and Basque * **Downstream task:** Extractive QA * **Training data:** SQuAD v1.1 * **Eval data:** SQuAD v1.1 * **Infrastructure:** 1x GeForce RTX 2080 ## Outputs The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example: ```python {'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'} ``` ## How to use ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "MarcBrun/ixambert-finetuned-squad" # To get predictions context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820" question = "When was Florence Nightingale born?" qa = pipeline("question-answering", model=model_name, tokenizer=model_name) pred = qa(question=question,context=context) # To load the model and tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Hyperparameters ``` batch_size = 8 n_epochs = 3 learning_rate = 2e-5 optimizer = AdamW lr_schedule = linear max_seq_len = 384 doc_stride = 128 ```
MariamD/distilbert-base-uncased-finetuned-legal_data
MariamD
distilbert
14
9
transformers
0
question-answering
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
6,232
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-legal_data This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.9101 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 26 | 5.3529 | | No log | 2.0 | 52 | 5.4226 | | No log | 3.0 | 78 | 5.2550 | | No log | 4.0 | 104 | 5.1011 | | No log | 5.0 | 130 | 5.1857 | | No log | 6.0 | 156 | 5.5119 | | No log | 7.0 | 182 | 5.4480 | | No log | 8.0 | 208 | 5.6993 | | No log | 9.0 | 234 | 5.9614 | | No log | 10.0 | 260 | 5.6987 | | No log | 11.0 | 286 | 5.6679 | | No log | 12.0 | 312 | 5.9850 | | No log | 13.0 | 338 | 5.6065 | | No log | 14.0 | 364 | 5.3162 | | No log | 15.0 | 390 | 5.7856 | | No log | 16.0 | 416 | 5.5786 | | No log | 17.0 | 442 | 5.6028 | | No log | 18.0 | 468 | 5.7649 | | No log | 19.0 | 494 | 5.5382 | | 1.8345 | 20.0 | 520 | 6.3654 | | 1.8345 | 21.0 | 546 | 5.3575 | | 1.8345 | 22.0 | 572 | 5.3808 | | 1.8345 | 23.0 | 598 | 5.9340 | | 1.8345 | 24.0 | 624 | 6.1475 | | 1.8345 | 25.0 | 650 | 6.2188 | | 1.8345 | 26.0 | 676 | 5.7651 | | 1.8345 | 27.0 | 702 | 6.2629 | | 1.8345 | 28.0 | 728 | 6.1356 | | 1.8345 | 29.0 | 754 | 5.9255 | | 1.8345 | 30.0 | 780 | 6.4252 | | 1.8345 | 31.0 | 806 | 5.6967 | | 1.8345 | 32.0 | 832 | 6.4324 | | 1.8345 | 33.0 | 858 | 6.5087 | | 1.8345 | 34.0 | 884 | 6.1113 | | 1.8345 | 35.0 | 910 | 6.7443 | | 1.8345 | 36.0 | 936 | 6.6970 | | 1.8345 | 37.0 | 962 | 6.5578 | | 1.8345 | 38.0 | 988 | 6.1963 | | 0.2251 | 39.0 | 1014 | 6.4893 | | 0.2251 | 40.0 | 1040 | 6.6347 | | 0.2251 | 41.0 | 1066 | 6.7106 | | 0.2251 | 42.0 | 1092 | 6.8129 | | 0.2251 | 43.0 | 1118 | 6.6386 | | 0.2251 | 44.0 | 1144 | 6.4134 | | 0.2251 | 45.0 | 1170 | 6.6883 | | 0.2251 | 46.0 | 1196 | 6.6406 | | 0.2251 | 47.0 | 1222 | 6.3065 | | 0.2251 | 48.0 | 1248 | 7.0281 | | 0.2251 | 49.0 | 1274 | 7.3646 | | 0.2251 | 50.0 | 1300 | 7.1086 | | 0.2251 | 51.0 | 1326 | 6.4749 | | 0.2251 | 52.0 | 1352 | 6.3303 | | 0.2251 | 53.0 | 1378 | 6.2919 | | 0.2251 | 54.0 | 1404 | 6.3855 | | 0.2251 | 55.0 | 1430 | 6.9501 | | 0.2251 | 56.0 | 1456 | 6.8714 | | 0.2251 | 57.0 | 1482 | 6.9856 | | 0.0891 | 58.0 | 1508 | 6.9910 | | 0.0891 | 59.0 | 1534 | 6.9293 | | 0.0891 | 60.0 | 1560 | 7.3493 | | 0.0891 | 61.0 | 1586 | 7.1834 | | 0.0891 | 62.0 | 1612 | 7.0479 | | 0.0891 | 63.0 | 1638 | 6.7674 | | 0.0891 | 64.0 | 1664 | 6.7553 | | 0.0891 | 65.0 | 1690 | 7.3074 | | 0.0891 | 66.0 | 1716 | 6.8071 | | 0.0891 | 67.0 | 1742 | 7.6622 | | 0.0891 | 68.0 | 1768 | 6.9555 | | 0.0891 | 69.0 | 1794 | 7.0153 | | 0.0891 | 70.0 | 1820 | 7.2085 | | 0.0891 | 71.0 | 1846 | 6.7582 | | 0.0891 | 72.0 | 1872 | 6.7989 | | 0.0891 | 73.0 | 1898 | 6.7012 | | 0.0891 | 74.0 | 1924 | 7.0088 | | 0.0891 | 75.0 | 1950 | 7.1024 | | 0.0891 | 76.0 | 1976 | 6.6968 | | 0.058 | 77.0 | 2002 | 7.5249 | | 0.058 | 78.0 | 2028 | 6.9199 | | 0.058 | 79.0 | 2054 | 7.1995 | | 0.058 | 80.0 | 2080 | 6.9349 | | 0.058 | 81.0 | 2106 | 7.4025 | | 0.058 | 82.0 | 2132 | 7.4199 | | 0.058 | 83.0 | 2158 | 6.8081 | | 0.058 | 84.0 | 2184 | 7.4777 | | 0.058 | 85.0 | 2210 | 7.1990 | | 0.058 | 86.0 | 2236 | 7.0062 | | 0.058 | 87.0 | 2262 | 7.5724 | | 0.058 | 88.0 | 2288 | 6.9362 | | 0.058 | 89.0 | 2314 | 7.1368 | | 0.058 | 90.0 | 2340 | 7.2183 | | 0.058 | 91.0 | 2366 | 6.8684 | | 0.058 | 92.0 | 2392 | 7.1433 | | 0.058 | 93.0 | 2418 | 7.2161 | | 0.058 | 94.0 | 2444 | 7.1442 | | 0.058 | 95.0 | 2470 | 7.3098 | | 0.058 | 96.0 | 2496 | 7.1264 | | 0.0512 | 97.0 | 2522 | 6.9424 | | 0.0512 | 98.0 | 2548 | 6.9155 | | 0.0512 | 99.0 | 2574 | 6.9038 | | 0.0512 | 100.0 | 2600 | 6.9101 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
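The hyperparameters above map directly onto a `TrainingArguments` object. As a rough sketch of how this run could be reconstructed (the legal dataset itself is not published, so `train_ds`/`eval_ds` are placeholders and the QA head and output directory are assumptions):

```python
# Sketch only: rebuilds the hyperparameters listed in the card.
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments)

model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-legal_data",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=100,
    lr_scheduler_type="linear",  # the Adam betas/epsilon above are the defaults
)
# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```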
MarioPenguin/bert-model-english
MarioPenguin
bert
4
5
transformers
0
text-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,919
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bert-model-english This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1408 - Train Sparse Categorical Accuracy: 0.9512 - Validation Loss: nan - Validation Sparse Categorical Accuracy: 0.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:| | 0.2775 | 0.8887 | nan | 0.0 | 0 | | 0.1702 | 0.9390 | nan | 0.0 | 1 | | 0.1300 | 0.9555 | nan | 0.0 | 2 | | 0.1346 | 0.9544 | nan | 0.0 | 3 | | 0.1408 | 0.9512 | nan | 0.0 | 4 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.7.0 - Datasets 1.18.3 - Tokenizers 0.11.0
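The optimizer dictionary above corresponds to a plain Keras Adam instance. A minimal sketch of how it would be rebuilt (the `model.compile` line is an assumption, since the card does not state the loss, though the reported sparse categorical accuracy suggests it):

```python
# Sketch: the Adam configuration listed in the card, rebuilt in tf.keras.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-05, beta_1=0.9, beta_2=0.999,
    epsilon=1e-07, amsgrad=False,
)
# model.compile(optimizer=optimizer,
#               loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
#               metrics=["sparse_categorical_accuracy"])
```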
MarioPenguin/bert-model-english1
MarioPenguin
bert
8
7
transformers
0
text-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,462
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bert-model-english1 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0274 - Train Accuracy: 0.9914 - Validation Loss: 0.3493 - Validation Accuracy: 0.9303 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.0366 | 0.9885 | 0.3013 | 0.9299 | 0 | | 0.0261 | 0.9912 | 0.3445 | 0.9351 | 1 | | 0.0274 | 0.9914 | 0.3493 | 0.9303 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.7.0 - Datasets 1.18.3 - Tokenizers 0.11.0
MarioPenguin/beto_amazon_posneu
MarioPenguin
bert
8
5
transformers
0
text-classification
false
true
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,509
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # beto_amazon_posneu This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1277 - Train Accuracy: 0.9550 - Validation Loss: 0.3439 - Validation Accuracy: 0.8905 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3195 | 0.8712 | 0.3454 | 0.8580 | 0 | | 0.1774 | 0.9358 | 0.3258 | 0.8802 | 1 | | 0.1277 | 0.9550 | 0.3439 | 0.8905 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.7.0 - Datasets 1.18.3 - Tokenizers 0.11.0
MarioPenguin/finetuned-model
MarioPenguin
roberta
19
5
transformers
0
text-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,305
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-model This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8601 - Accuracy: 0.6117 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 84 | 0.8663 | 0.5914 | | No log | 2.0 | 168 | 0.8601 | 0.6117 | ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
MarioPenguin/roberta-model-english
MarioPenguin
roberta
9
5
transformers
0
text-classification
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,440
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-model-english This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1140 - Train Accuracy: 0.9596 - Validation Loss: 0.2166 - Validation Accuracy: 0.9301 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2922 | 0.8804 | 0.2054 | 0.9162 | 0 | | 0.1710 | 0.9352 | 0.1879 | 0.9353 | 1 | | 0.1140 | 0.9596 | 0.2166 | 0.9301 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.7.0 - Tokenizers 0.11.0
MarshallHo/albertZero-squad2-base-v2
MarshallHo
null
3
0
null
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
2,112
# albertZero albertZero is a PyTorch model with a prediction head fine-tuned for SQuAD 2.0. Based on Hugging Face's albert-base-v2, albertZero employs a novel method to speed up fine-tuning: it re-initializes the weights of the final linear layer in the shared ALBERT transformer block, resulting in a 2-percentage-point improvement during the early epochs of fine-tuning. ## Usage albertZero can be loaded like this: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('MarshallHo/albertZero-squad2-base-v2') model = AutoModel.from_pretrained('MarshallHo/albertZero-squad2-base-v2') ``` or ```python import torch from transformers import AlbertModel, AlbertTokenizer, AlbertForQuestionAnswering, AlbertPreTrainedModel mytokenizer = AlbertTokenizer.from_pretrained('albert-base-v2') # AlbertForQuestionAnsweringAVPool is the custom head class defined in this repo model = AlbertForQuestionAnsweringAVPool.from_pretrained('albert-base-v2') model.load_state_dict(torch.load('albertZero-squad2-base-v2.bin')) ``` ## References The goal of [ALBERT](https://arxiv.org/abs/1909.11942) is to reduce the memory requirement of the groundbreaking language model [BERT](https://arxiv.org/abs/1810.04805) while providing a similar level of performance. ALBERT mainly uses two methods to reduce the number of parameters – parameter sharing and factorized embedding. The field of NLP has undergone major improvements in recent years. The replacement of recurrent architectures by attention-based models has allowed NLP tasks such as question answering to approach human-level performance. To push the limits further, the [SQuAD2.0](https://arxiv.org/abs/1806.03822) dataset was created in 2018 with 50,000 additional unanswerable questions, addressing a major weakness of the original version of the dataset. At the time of writing, near the top of the [SQuAD2.0 leaderboard](https://rajpurkar.github.io/SQuAD-explorer/) is Shanghai Jiao Tong University's [Retro-Reader](http://arxiv.org/abs/2001.09694). We have re-implemented their non-ensemble ALBERT model with the SQuAD2.0 prediction head. ## Acknowledgments Thanks to the generosity of the team at Hugging Face and all the groups referenced above!
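For illustration, the re-initialization step described above could look roughly like the following (a minimal sketch; it assumes the last linear layer of the shared block is the `ffn_output` projection, which may differ from the exact layer albertZero resets):

```python
# Sketch: re-initialize the final linear layer of ALBERT's shared
# transformer block before fine-tuning. Because ALBERT shares one
# block across all layers, a single re-init affects every layer.
import torch
from transformers import AlbertForQuestionAnswering

model = AlbertForQuestionAnswering.from_pretrained("albert-base-v2")
shared = model.albert.encoder.albert_layer_groups[0].albert_layers[0]
layer = shared.ffn_output  # assumed to be the "final linear layer"
torch.nn.init.normal_(layer.weight, mean=0.0, std=model.config.initializer_range)
torch.nn.init.zeros_(layer.bias)
```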
Martian/Neo-GPT-Title-Generation-Electric-Car
Martian
gpt_neo
15
12
transformers
1
text-generation
true
false
false
null
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,898
# Neo-GPT-Title-Generation-Electric-Car Title generator based on GPT-Neo 125M, fine-tuned on a dataset of 39k URL titles. All URLs were selected from the top 10 Google results for a list of keywords about "Electric car" and "Electric car for sale". # Pipeline example ```python from transformers import GPT2Tokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained('Martian/Neo-GPT-Title-Generation-Electric-Car') tokenizer = GPT2Tokenizer.from_pretrained('Martian/Neo-GPT-Title-Generation-Electric-Car', bos_token='<|startoftext|>', eos_token='<|endoftext|>', pad_token='<|pad|>') prompt = "<|startoftext|> Electric car" input_ids = tokenizer(prompt, return_tensors="pt").input_ids gen_tokens = model.generate(input_ids, do_sample=True, top_k=100, min_length=30, max_length=150, top_p=0.90, num_return_sequences=20) # Decode the generated sequences, dropping special tokens list_title_gen = [tokenizer.decode(tokens, skip_special_tokens=True) for tokens in gen_tokens] # Keep only the part of each title before common separators list_title_gen = [title.split(' | ')[0].split(' - ')[0].split(' — ')[0] for title in list_title_gen] # Clean up whitespace artifacts list_title_gen = [title.replace('�', ' ').replace('\r', ' ').replace('\t', ' ').replace('\xa0', '') for title in list_title_gen] # Drop generations that only echo the prompt list_title_gen = [title if title.strip() != 'Electric car' else '' for title in list_title_gen] for title in list_title_gen: print(title) ``` # Todo - Improve the quality of the training sample - Add more data
Marxav/wav2vec2-large-xlsr-53-breton
Marxav
wav2vec2
9
20
transformers
0
automatic-speech-recognition
true
false
true
apache-2.0
['br']
['common_voice']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
true
true
true
3,979
# wav2vec2-large-xlsr-53-breton The model can be used directly (without a language model) as follows: ```python import re import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor lang = "br" test_dataset = load_dataset("common_voice", lang, split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("Marxav/wav2vec2-large-xlsr-53-breton") model = Wav2Vec2ForCTC.from_pretrained("Marxav/wav2vec2-large-xlsr-53-breton") resampler = torchaudio.transforms.Resample(48_000, 16_000) chars_to_ignore_regex = '[\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]' # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " " batch["sentence"] = re.sub("ʼ", "'", batch["sentence"]) batch["sentence"] = re.sub("’", "'", batch["sentence"]) batch["sentence"] = re.sub('‘', "'", batch["sentence"]) return batch nb_samples = 2 test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:nb_samples], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:nb_samples]) ``` The above code leads to the following prediction for the first two samples: * Prediction: ["neller ket dont a-benn eus netra la vez ser merc'hed evel sich", 'an eil hag egile'] * Reference: ["N'haller ket dont a-benn eus netra pa vezer nec'het evel-se.", 'An eil hag egile.'] The model can be evaluated as follows on the Breton test data of Common Voice. ```python import re import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor lang = 'br' test_dataset = load_dataset("common_voice", lang, split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained('Marxav/wav2vec2-large-xlsr-53-breton') model = Wav2Vec2ForCTC.from_pretrained('Marxav/wav2vec2-large-xlsr-53-breton') model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " " batch["sentence"] = re.sub("ʼ", "'", batch["sentence"]) batch["sentence"] = re.sub("’", "'", batch["sentence"]) batch["sentence"] = re.sub('‘', "'", batch["sentence"]) speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 43.43% ## Training The Common Voice `train`, `validation` datasets were used for training.
MaryaAI/opus-mt-ar-en-finetuned-ar-to-en
MaryaAI
marian
13
1,043
transformers
0
text2text-generation
true
false
false
null
null
['opus_wikipedia']
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
975
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-ar-en-finetuned-ar-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the opus_wikipedia dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
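The card gives no usage example; a minimal translation sketch (the Arabic sample sentence is our own, not taken from the training data):

```python
# Sketch: Arabic-to-English translation with the fine-tuned MarianMT model.
from transformers import pipeline

translator = pipeline("translation", model="MaryaAI/opus-mt-ar-en-finetuned-ar-to-en")
print(translator("مرحبا بالعالم")[0]["translation_text"])
```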
MaryaAI/opus-mt-ar-en-finetunedTanzil-v5-ar-to-en
MaryaAI
marian
9
5
transformers
0
text2text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,907
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-ar-en-finetunedTanzil-v5-ar-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.8101 - Validation Loss: 0.9477 - Train Bleu: 9.3241 - Train Gen Len: 88.73 - Train Rouge1: 56.4906 - Train Rouge2: 34.2668 - Train Rougel: 53.2279 - Train Rougelsum: 53.7836 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Bleu | Train Gen Len | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Epoch | |:----------:|:---------------:|:----------:|:-------------:|:------------:|:------------:|:------------:|:---------------:|:-----:| | 0.8735 | 0.9809 | 11.0863 | 78.68 | 56.4557 | 33.3673 | 53.4828 | 54.1197 | 0 | | 0.8408 | 0.9647 | 9.8543 | 88.955 | 57.3797 | 34.3539 | 53.8783 | 54.3714 | 1 | | 0.8101 | 0.9477 | 9.3241 | 88.73 | 56.4906 | 34.2668 | 53.2279 | 53.7836 | 2 | ### Framework versions - Transformers 4.17.0.dev0 - TensorFlow 2.7.0 - Datasets 1.18.4.dev0 - Tokenizers 0.10.3
MaryaAI/opus-mt-en-ar-finetuned-Math-13-10-en-to-ar
MaryaAI
marian
27
1,091
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['syssr_en_ar']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
983
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ar-finetuned-Math-13-10-en-to-ar This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the syssr_en_ar dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.0 - Tokenizers 0.10.3
MaryaAI/opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en
MaryaAI
marian
13
5
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['syssr_en_ar']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,605
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the syssr_en_ar dataset. It achieves the following results on the evaluation set: - Loss: 1.2046 - Bleu: 7.9946 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 1 | 1.2038 | 7.9946 | 20.0 | | No log | 2.0 | 2 | 1.2038 | 7.9946 | 20.0 | | No log | 3.0 | 3 | 1.2038 | 7.9946 | 20.0 | | No log | 4.0 | 4 | 1.2036 | 7.9946 | 20.0 | | No log | 5.0 | 5 | 1.2046 | 7.9946 | 20.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar
MaryaAI
marian
11
5
transformers
0
text2text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,228
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.0589 - Validation Loss: 5.3227 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.0589 | 5.3227 | 0 | ### Framework versions - Transformers 4.17.0.dev0 - TensorFlow 2.7.0 - Datasets 1.18.3.dev0 - Tokenizers 0.10.3
MaryaAI/opus-mt-en-ro-finetuned-en-to-ro
MaryaAI
marian
15
20
transformers
0
text2text-generation
true
false
false
null
null
['wmt16']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,313
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ro-finetuned-en-to-ro This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.2886 - Bleu: 28.1599 - Gen Len: 34.1236 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 0.7437 | 1.0 | 38145 | 1.2886 | 28.1599 | 34.1236 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
Mathking/bert-base-german-cased-gnad10
Mathking
bert
13
198
transformers
0
text-classification
true
false
false
null
['de']
['gnad10']
null
0
0
0
0
0
0
0
['text-classification', 'german-news-classification']
false
true
true
232
# German BERT for News Classification This is a bert-base-german-cased model fine-tuned for text classification on German news articles. ## Training data The training set of the 10KGNAD dataset (`gnad10` on Hugging Face Datasets) was used.
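For reference, a minimal usage sketch (the headline is our own example; the predicted labels are the 10KGNAD topic classes):

```python
# Sketch: classifying a German news snippet with the fine-tuned model.
from transformers import pipeline

classifier = pipeline("text-classification", model="Mathking/bert-base-german-cased-gnad10")
print(classifier("Der DAX legt nach der Zinsentscheidung deutlich zu."))
```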
MatsUy/wav2vec2-common_voice-nl-demo
MatsUy
wav2vec2
15
10
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['nl']
['common_voice']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer']
true
true
true
2,098
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-nl-demo This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - NL dataset. It achieves the following results on the evaluation set: - Loss: 0.3523 - Wer: 0.2046 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0536 | 1.12 | 500 | 0.5349 | 0.4338 | | 0.2543 | 2.24 | 1000 | 0.3859 | 0.3029 | | 0.1472 | 3.36 | 1500 | 0.3471 | 0.2818 | | 0.1088 | 4.47 | 2000 | 0.3489 | 0.2731 | | 0.0855 | 5.59 | 2500 | 0.3582 | 0.2558 | | 0.0721 | 6.71 | 3000 | 0.3457 | 0.2471 | | 0.0653 | 7.83 | 3500 | 0.3299 | 0.2357 | | 0.0527 | 8.95 | 4000 | 0.3440 | 0.2334 | | 0.0444 | 10.07 | 4500 | 0.3417 | 0.2289 | | 0.0404 | 11.19 | 5000 | 0.3691 | 0.2204 | | 0.0345 | 12.3 | 5500 | 0.3453 | 0.2102 | | 0.0288 | 13.42 | 6000 | 0.3634 | 0.2089 | | 0.027 | 14.54 | 6500 | 0.3532 | 0.2044 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
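The card lists only training details; a minimal inference sketch (`sample.wav` is a placeholder path, and the audio is assumed to be 16 kHz mono):

```python
# Sketch: transcribing Dutch speech with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="MatsUy/wav2vec2-common_voice-nl-demo")
print(asr("sample.wav")["text"])
```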
Matthijsvanhof/4
Matthijsvanhof
bert
13
15
transformers
0
token-classification
true
false
false
null
null
null
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,415
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 4 This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1243 - Precision: 0.5220 - Recall: 0.6137 - F1: 0.5641 - Accuracy: 0.9630 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 134 | 0.1357 | 0.4549 | 0.5521 | 0.4988 | 0.9574 | | No log | 2.0 | 268 | 0.1243 | 0.5220 | 0.6137 | 0.5641 | 0.9630 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Tokenizers 0.10.3
Matthijsvanhof/bert-base-dutch-cased-finetuned-NER
Matthijsvanhof
bert
16
11
transformers
0
token-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,465
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-dutch-cased-finetuned-NER This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1078 - Precision: 0.6129 - Recall: 0.6639 - F1: 0.6374 - Accuracy: 0.9688 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 267 | 0.1131 | 0.6090 | 0.6264 | 0.6176 | 0.9678 | | 0.1495 | 2.0 | 534 | 0.1078 | 0.6129 | 0.6639 | 0.6374 | 0.9688 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
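As a usage sketch (the example sentence is our own; the entity label set is whatever this checkpoint was trained with):

```python
# Sketch: Dutch named-entity recognition with the fine-tuned model.
from transformers import pipeline

ner = pipeline("token-classification",
               model="Matthijsvanhof/bert-base-dutch-cased-finetuned-NER",
               aggregation_strategy="simple")
print(ner("Willem-Alexander bezocht vorige week Amsterdam."))
```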
Matthijsvanhof/bert-base-dutch-cased-finetuned-NER8
Matthijsvanhof
bert
13
21
transformers
0
token-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,450
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-dutch-cased-finetuned-NER8 This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1482 - Precision: 0.4716 - Recall: 0.4359 - F1: 0.4530 - Accuracy: 0.9569 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 68 | 0.1705 | 0.3582 | 0.3488 | 0.3535 | 0.9475 | | No log | 2.0 | 136 | 0.1482 | 0.4716 | 0.4359 | 0.4530 | 0.9569 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Tokenizers 0.10.3
Matthijsvanhof/bert-base-dutch-cased-finetuned-mBERT
Matthijsvanhof
distilbert
13
16
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,463
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-dutch-cased-finetuned-mBERT This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0898 - Precision: 0.7255 - Recall: 0.7255 - F1: 0.7255 - Accuracy: 0.9758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1603 | 1.0 | 533 | 0.0928 | 0.6896 | 0.6962 | 0.6929 | 0.9742 | | 0.0832 | 2.0 | 1066 | 0.0898 | 0.7255 | 0.7255 | 0.7255 | 0.9758 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Tokenizers 0.10.3
MaxVortman/bert-base-ukr-eng-rus-uncased
MaxVortman
bert
6
14
transformers
0
feature-extraction
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
447
This repository shares a smaller version of bert-base-multilingual-uncased that keeps only Ukrainian, English, and Russian tokens in the vocabulary. | Model | Num parameters | Size | | ----------------------------------------- | -------------- | --------- | | bert-base-multilingual-uncased | 167 million | ~650 MB | | MaxVortman/bert-base-ukr-eng-rus-uncased | 110 million | ~423 MB |
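The reduced model loads like any other BERT checkpoint; a minimal feature-extraction sketch (the Ukrainian sample sentence is our own):

```python
# Sketch: extracting token features with the reduced-vocabulary model.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaxVortman/bert-base-ukr-eng-rus-uncased")
model = AutoModel.from_pretrained("MaxVortman/bert-base-ukr-eng-rus-uncased")
inputs = tokenizer("Привіт, світе!", return_tensors="pt")
features = model(**inputs).last_hidden_state  # one vector per token
```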
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech
MehdiHosseiniMoghadam
wav2vec2
11
98
transformers
0
automatic-speech-recognition
true
false
true
apache-2.0
['cs']
['common_voice']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
true
true
true
3,401
# wav2vec2-large-xlsr-53-Czech Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "cs", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Czech test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "cs", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 27.047806 % ## Training The Common Voice `train`, `validation` datasets were used for training.
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Dutch
MehdiHosseiniMoghadam
wav2vec2
13
10
transformers
0
automatic-speech-recognition
true
false
true
apache-2.0
['nl']
['common_voice']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
true
true
true
3,404
# wav2vec2-large-xlsr-53-Dutch Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Dutch using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "nl", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Dutch") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Dutch") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Dutch test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "nl", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Dutch") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Dutch") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 26.494162 % ## Training The Common Voice `train`, `validation` datasets were used for training.
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French
MehdiHosseiniMoghadam
wav2vec2
11
9
transformers
0
automatic-speech-recognition
true
false
true
apache-2.0
['fr']
['common_voice']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
true
true
true
3,498
# wav2vec2-large-xlsr-53-French Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in French using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "fr", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the French test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "fr", split="test[:10%]") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 34.856015 % ## Training 10% of the Common Voice `train`, `validation` datasets were used for training. ## Testing 10% of the Common Voice `test` dataset was used for testing.
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian
MehdiHosseiniMoghadam
wav2vec2
13
15
transformers
0
automatic-speech-recognition
true
false
true
apache-2.0
['ka']
['common_voice']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
true
true
true
3,408
# wav2vec2-large-xlsr-53-Georgian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Georgian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ka", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Georgian test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ka", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 60.504024 % ## Training The Common Voice `train`, `validation` datasets were used for training.
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-German
MehdiHosseiniMoghadam
wav2vec2
11
9
transformers
0
automatic-speech-recognition
true
false
true
apache-2.0
['de']
['common_voice']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
true
true
true
3,496
# wav2vec2-large-xlsr-53-German Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in German using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "de", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-German") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-German") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the German test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "de", split="test[:15%]") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-German") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-German") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 25.284593 % ## Training 10% of the Common Voice `train`, `validation` datasets were used for training. ## Testing 15% of the Common Voice `test` dataset was used for testing.
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish
MehdiHosseiniMoghadam
wav2vec2
9
15
transformers
0
automatic-speech-recognition
true
false
true
apache-2.0
['sv-SE']
['common_voice']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
true
true
true
3,347
# wav2vec2-large-xlsr-53-Swedish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Swedish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Swedish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "sv-SE", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 41.388337 % ## Training The Common Voice `train` and `validation` datasets were used for training.
Meli/GPT2-Prompt
Meli
gpt2
10
738
transformers
6
text-generation
true
false
true
null
['en']
null
null
0
0
0
0
0
0
0
['gpt2', 'text-generation']
false
true
true
521
# GPT-2 Story Generator ## Model description Generates a short story from an input prompt. Append the token ` [endprompt]` to the end of your input. Example of an input: ``` A person with a high school education gets sent back into the 1600s and tries to explain science and technology to the people. [endprompt] ``` #### Limitations and bias The training data was collected from Reddit, so the model may be strongly biased towards a young, white, male demographic. ## Training data The data was collected by scraping Reddit.
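A minimal usage sketch (not from the original card): it assumes the standard `transformers` text-generation pipeline works with this checkpoint and that the prompt ends with ` [endprompt]` as described above; the sampling parameters are illustrative.

```python
from transformers import pipeline

# Hypothetical usage sketch for the story generator.
generator = pipeline("text-generation", model="Meli/GPT2-Prompt")

prompt = (
    "A person with a high school education gets sent back into the 1600s "
    "and tries to explain science and technology to the people. [endprompt]"
)

# max_length / do_sample / top_p are illustrative choices, not values from the card.
story = generator(prompt, max_length=200, do_sample=True, top_p=0.9)[0]["generated_text"]
print(story)
```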
MelissaTESSA/distilbert-base-uncased-finetuned-cola
MelissaTESSA
distilbert
13
5
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,572
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6324 - Matthews Correlation: 0.5207 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5245 | 1.0 | 535 | 0.5155 | 0.4181 | | 0.3446 | 2.0 | 1070 | 0.5623 | 0.4777 | | 0.2331 | 3.0 | 1605 | 0.6324 | 0.5207 | | 0.1678 | 4.0 | 2140 | 0.7706 | 0.5106 | | 0.1255 | 5.0 | 2675 | 0.8852 | 0.4998 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
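As a rough illustration (not the author's actual training script), the hyperparameters listed above map onto a standard `Trainer` setup roughly as follows; the tokenization step and the use of the GLUE CoLA subset are assumptions based on the card.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Assumed setup: distilbert-base-uncased fine-tuned on GLUE CoLA with the hyperparameters above.
checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

raw_datasets = load_dataset("glue", "cola")
encoded = raw_datasets.map(lambda ex: tokenizer(ex["sentence"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```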
MhF/distilbert-base-uncased-distilled-clinc
MhF
distilbert
10
5
transformers
0
text-classification
true
false
false
apache-2.0
null
['clinc_oos']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,730
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2663 - Accuracy: 0.9461 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 9 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.1991 | 1.0 | 318 | 3.1495 | 0.7523 | | 2.4112 | 2.0 | 636 | 1.5868 | 0.8510 | | 1.1887 | 3.0 | 954 | 0.7975 | 0.9203 | | 0.5952 | 4.0 | 1272 | 0.4870 | 0.9319 | | 0.3275 | 5.0 | 1590 | 0.3571 | 0.9419 | | 0.2066 | 6.0 | 1908 | 0.3070 | 0.9429 | | 0.1456 | 7.0 | 2226 | 0.2809 | 0.9448 | | 0.1154 | 8.0 | 2544 | 0.2697 | 0.9468 | | 0.1011 | 9.0 | 2862 | 0.2663 | 0.9461 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
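For reference (not part of the original card), a hedged inference sketch; it assumes the checkpoint stores its CLINC150 intent labels in the model config so the pipeline can return readable label names.

```python
from transformers import pipeline

# Hedged sketch: intent detection with the distilled checkpoint.
classifier = pipeline(
    "text-classification",
    model="MhF/distilbert-base-uncased-distilled-clinc",
)

query = "Can you transfer 100 dollars from my checking to my savings account?"
print(classifier(query))  # expected: the highest-scoring intent label and its score
```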
MhF/distilbert-base-uncased-finetuned-clinc
MhF
distilbert
12
5
transformers
0
text-classification
true
false
false
apache-2.0
null
['clinc_oos']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,482
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7703 - Accuracy: 0.9187 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2896 | 1.0 | 318 | 3.2887 | 0.7419 | | 2.6309 | 2.0 | 636 | 1.8797 | 0.8310 | | 1.5443 | 3.0 | 954 | 1.1537 | 0.8974 | | 1.0097 | 4.0 | 1272 | 0.8560 | 0.9135 | | 0.7918 | 5.0 | 1590 | 0.7703 | 0.9187 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
MhF/distilbert-base-uncased-finetuned-emotion
MhF
distilbert
14
10
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,345
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2232 - Accuracy: 0.9215 - F1: 0.9218 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8098 | 1.0 | 250 | 0.3138 | 0.9025 | 0.9001 | | 0.2429 | 2.0 | 500 | 0.2232 | 0.9215 | 0.9218 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
MhF/xlm-roberta-base-finetuned-panx-all
MhF
xlm-roberta
9
10
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1753 - F1: 0.8520 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2989 | 1.0 | 835 | 0.1878 | 0.8123 | | 0.1548 | 2.0 | 1670 | 0.1745 | 0.8480 | | 0.1012 | 3.0 | 2505 | 0.1753 | 0.8520 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
MhF/xlm-roberta-base-finetuned-panx-de-fr
MhF
xlm-roberta
9
9
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,321
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1576 - F1: 0.8571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2924 | 1.0 | 715 | 0.1819 | 0.8286 | | 0.1503 | 2.0 | 1430 | 0.1580 | 0.8511 | | 0.0972 | 3.0 | 2145 | 0.1576 | 0.8571 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
MhF/xlm-roberta-base-finetuned-panx-de
MhF
xlm-roberta
15
21
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,320
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1354 - F1: 0.8621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.254 | 1.0 | 525 | 0.1652 | 0.8254 | | 0.1293 | 2.0 | 1050 | 0.1431 | 0.8489 | | 0.0797 | 3.0 | 1575 | 0.1354 | 0.8621 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
MhF/xlm-roberta-base-finetuned-panx-en
MhF
xlm-roberta
9
9
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,320
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3856 - F1: 0.6808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1038 | 1.0 | 50 | 0.5255 | 0.5331 | | 0.4922 | 2.0 | 100 | 0.4377 | 0.6379 | | 0.3664 | 3.0 | 150 | 0.3856 | 0.6808 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
MhF/xlm-roberta-base-finetuned-panx-fr
MhF
xlm-roberta
9
11
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,320
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2736 - F1: 0.8353 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5826 | 1.0 | 191 | 0.3442 | 0.7888 | | 0.2669 | 2.0 | 382 | 0.2848 | 0.8326 | | 0.1818 | 3.0 | 573 | 0.2736 | 0.8353 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
MhF/xlm-roberta-base-finetuned-panx-it
MhF
xlm-roberta
9
9
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,320
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2491 - F1: 0.8213 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8192 | 1.0 | 70 | 0.3300 | 0.7184 | | 0.2949 | 2.0 | 140 | 0.2817 | 0.7959 | | 0.189 | 3.0 | 210 | 0.2491 | 0.8213 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
Michael711/feinschwarz
Michael711
gpt2
15
7
transformers
0
text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer', 'de']
true
true
true
1,091
# feinschwarz This model is a fine-tuned version of [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2). The dataset was compiled from all texts of https://www.feinschwarz.net (as of October 2021). The homepage gathers essayistic texts on theological topics. The model will be used to explore the challenges of text-generating AI for theology with a hands-on approach. Can an AI generate theological knowledge? Is a text by Karl Rahner of more value than an AI-generated text? Can we even distinguish a Rahner text from an AI-generated text in the future? And the crucial question: Would it be bad if not? The model is a first attempt and, in its current version, certainly not yet a danger to academic theology 🤓 # Using the model You can create text with the model using this code: ```python from transformers import pipeline pipe = pipeline('text-generation', model="Michael711/feinschwarz", tokenizer="Michael711/feinschwarz") text = pipe("Der Sinn des Lebens ist es", max_length=100)[0]["generated_text"] print(text) ``` Have fun theologizing!
Michau/t5-base-en-generate-headline
Michau
t5
9
383,124
transformers
41
text2text-generation
true
true
true
null
null
null
null
0
0
0
0
1
1
0
[]
false
false
true
2,639
## About the model The model has been trained on a collection of 500k articles with headings. Its purpose is to create a one-line heading suitable for the given article. Sample code with a WikiNews article: ```python import torch from transformers import T5ForConditionalGeneration,T5Tokenizer device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = T5ForConditionalGeneration.from_pretrained("Michau/t5-base-en-generate-headline") tokenizer = T5Tokenizer.from_pretrained("Michau/t5-base-en-generate-headline") model = model.to(device) article = ''' Very early yesterday morning, the United States President Donald Trump reported he and his wife First Lady Melania Trump tested positive for COVID-19. Officials said the Trumps' 14-year-old son Barron tested negative as did First Family and Senior Advisors Jared Kushner and Ivanka Trump. Trump took to social media, posting at 12:54 am local time (0454 UTC) on Twitter, "Tonight, [Melania] and I tested positive for COVID-19. We will begin our quarantine and recovery process immediately. We will get through this TOGETHER!" Yesterday afternoon Marine One landed on the White House's South Lawn flying Trump to Walter Reed National Military Medical Center (WRNMMC) in Bethesda, Maryland. Reports said both were showing "mild symptoms". Senior administration officials were tested as people were informed of the positive test. Senior advisor Hope Hicks had tested positive on Thursday. Presidential physician Sean Conley issued a statement saying Trump has been given zinc, vitamin D, Pepcid and a daily Aspirin. Conley also gave a single dose of the experimental polyclonal antibodies drug from Regeneron Pharmaceuticals. According to official statements, Trump, now operating from the WRNMMC, is to continue performing his duties as president during a 14-day quarantine. In the event of Trump becoming incapacitated, Vice President Mike Pence could take over the duties of president via the 25th Amendment of the US Constitution. The Pence family all tested negative as of yesterday and there were no changes regarding Pence's campaign events. ''' text = "headline: " + article max_len = 256 encoding = tokenizer.encode_plus(text, return_tensors = "pt") input_ids = encoding["input_ids"].to(device) attention_masks = encoding["attention_mask"].to(device) beam_outputs = model.generate( input_ids = input_ids, attention_mask = attention_masks, max_length = 64, num_beams = 3, early_stopping = True, ) result = tokenizer.decode(beam_outputs[0]) print(result) ``` Result: ```Trump and First Lady Melania Test Positive for COVID-19```
MilaNLProc/feel-it-italian-emotion
MilaNLProc
camembert
11
40,900
transformers
8
text-classification
true
true
false
null
['it']
null
null
0
0
0
0
0
0
0
['sentiment', 'emotion', 'Italian']
false
true
true
3,540
# FEEL-IT: Emotion and Sentiment Classification for the Italian Language ## FEEL-IT Python Package You can find the package that uses this model for emotion and sentiment classification **[here](https://github.com/MilaNLProc/feel-it)** it is meant to be a very simple interface over HuggingFace models. ## License Users should refer to the [following license](https://developer.twitter.com/en/developer-terms/commercial-terms) ## Abstract Sentiment analysis is a common task to understand people's reactions online. Still, we often need more nuanced information: is the post negative because the user is angry or because they are sad? An abundance of approaches has been introduced for tackling both tasks. However, at least for Italian, they all treat only one of the tasks at a time. We introduce *FEEL-IT*, a novel benchmark corpus of Italian Twitter posts annotated with four basic emotions: **anger, fear, joy, sadness**. By collapsing them, we can also do **sentiment analysis**. We evaluate our corpus on benchmark datasets for both emotion and sentiment classification, obtaining competitive results. We release an [open-source Python library](https://github.com/MilaNLProc/feel-it), so researchers can use a model trained on FEEL-IT for inferring both sentiments and emotions from Italian text. | Model | Download | | ------ | -------------------------| | `feel-it-italian-sentiment` | [Link](https://huggingface.co/MilaNLProc/feel-it-italian-sentiment) | | `feel-it-italian-emotion` | [Link](https://huggingface.co/MilaNLProc/feel-it-italian-emotion) | ## Model The *feel-it-italian-emotion* model performs **emotion classification (joy, fear, anger, sadness)** on Italian. We fine-tuned the [UmBERTo model](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1) on our new dataset (i.e., FEEL-IT) obtaining state-of-the-art performances on different benchmark corpora. ## Data Our data has been collected by annotating tweets from a broad range of topics. In total, we have 2037 tweets annotated with an emotion label. More details can be found in our paper (https://aclanthology.org/2021.wassa-1.8/). ## Performance We evaluate our performance using [MultiEmotions-It](http://ceur-ws.org/Vol-2769/paper_08.pdf). This dataset differs from FEEL-IT both in terms of topic variety and considered social media (i.e., YouTube and Facebook). We considered only the subset of emotions present in FEEL-IT. To give a point of reference, we also show the Most Frequent Class (MFC) baseline results. The results show that training on FEEL-IT brings stable performance even on datasets from different contexts. | Training Dataset | Macro-F1 | Accuracy | ------ | ------ |------ | | MFC | 0.20 | 0.64 | | FEEL-IT | **0.57** | **0.73** | ## Usage ```python from transformers import pipeline classifier = pipeline("text-classification",model='MilaNLProc/feel-it-italian-emotion',top_k=2) prediction = classifier("Oggi sono proprio contento!") print(prediction) ``` ## Citation Please use the following bibtex entry if you use this model in your project: ``` @inproceedings{bianchi2021feel, title = {{"FEEL-IT: Emotion and Sentiment Classification for the Italian Language"}}, author = "Bianchi, Federico and Nozza, Debora and Hovy, Dirk", booktitle = "Proceedings of the 11th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", year = "2021", publisher = "Association for Computational Linguistics", } ```
MilaNLProc/feel-it-italian-sentiment
MilaNLProc
camembert
11
11,062
transformers
8
text-classification
true
true
false
null
['it']
null
null
0
0
0
0
0
0
0
['sentiment', 'Italian']
false
true
true
3,615
# FEEL-IT: Emotion and Sentiment Classification for the Italian Language ## FEEL-IT Python Package You can find the package that uses this model for emotion and sentiment classification **[here](https://github.com/MilaNLProc/feel-it)** it is meant to be a very simple interface over HuggingFace models. ## License Users should refer to the [following license](https://developer.twitter.com/en/developer-terms/commercial-terms) ## Abstract Sentiment analysis is a common task to understand people's reactions online. Still, we often need more nuanced information: is the post negative because the user is angry or because they are sad? An abundance of approaches has been introduced for tackling both tasks. However, at least for Italian, they all treat only one of the tasks at a time. We introduce *FEEL-IT*, a novel benchmark corpus of Italian Twitter posts annotated with four basic emotions: **anger, fear, joy, sadness**. By collapsing them, we can also do **sentiment analysis**. We evaluate our corpus on benchmark datasets for both emotion and sentiment classification, obtaining competitive results. We release an [open-source Python library](https://github.com/MilaNLProc/feel-it), so researchers can use a model trained on FEEL-IT for inferring both sentiments and emotions from Italian text. | Model | Download | | ------ | -------------------------| | `feel-it-italian-sentiment` | [Link](https://huggingface.co/MilaNLProc/feel-it-italian-sentiment) | | `feel-it-italian-emotion` | [Link](https://huggingface.co/MilaNLProc/feel-it-italian-emotion) | ## Model The *feel-it-italian-sentiment* model performs **sentiment analysis** on Italian. We fine-tuned the [UmBERTo model](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1) on our new dataset (i.e., FEEL-IT) obtaining state-of-the-art performances on different benchmark corpora. ## Data Our data has been collected by annotating tweets from a broad range of topics. In total, we have 2037 tweets annotated with an emotion label. More details can be found in our paper (https://aclanthology.org/2021.wassa-1.8/). ## Performance We evaluate our performance using [SENTIPOLC16 Evalita](http://www.di.unito.it/~tutreeb/sentipolc-evalita16/). We collapsed the FEEL-IT classes into 2 by mapping joy to the *positive* class and anger, fear and sadness into the *negative* class. We compare three different experimental configurations training on FEEL-IT, SENTIPOLC16, or both by testing on the SENTIPOLC16 test set. The results show that training on FEEL-IT can provide better results on the SENTIPOLC16 test set than those that can be obtained with the SENTIPOLC16 training set. | Training Dataset | Macro-F1 | Accuracy | ------ | ------ |------ | | SENTIPOLC16 | 0.80 | 0.81 | | FEEL-IT | **0.81** | **0.84** | | FEEL-IT+SentiPolc | 0.81 | 0.82 ## Usage ```python from transformers import pipeline classifier = pipeline("text-classification",model='MilaNLProc/feel-it-italian-sentiment',top_k=2) prediction = classifier("Oggi sono proprio contento!") print(prediction) ``` ## Citation Please use the following bibtex entry if you use this model in your project: ``` @inproceedings{bianchi2021feel, title = {{"FEEL-IT: Emotion and Sentiment Classification for the Italian Language"}}, author = "Bianchi, Federico and Nozza, Debora and Hovy, Dirk", booktitle = "Proceedings of the 11th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", year = "2021", publisher = "Association for Computational Linguistics", } ```
Milos/slovak-gpt-j-1.4B
Milos
gptj
6
27
transformers
0
text-generation
true
false
false
gpl-3.0
['sk']
null
null
0
0
0
0
0
0
0
['Slovak GPT-J', 'pytorch', 'causal-lm']
false
true
true
9,097
# Slovak GPT-J-1.4B Slovak GPT-J-1.4B with the whopping `1,415,283,792` parameters is the latest and the largest model released in Slovak GPT-J series. Smaller variants, [Slovak GPT-J-405M](https://huggingface.co/Milos/slovak-gpt-j-405M) and [Slovak GPT-J-162M](https://huggingface.co/Milos/slovak-gpt-j-162M), are still available. ## Model Description Model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) and has over 1.4B trainable parameters. <figure> | Hyperparameter | Value | |----------------------|----------------------------------------------------------------------------------------------------------------------------------------| | \\(n_{parameters}\\) | 1,415,283,792 | | \\(n_{layers}\\) | 24 | | \\(d_{model}\\) | 2048 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50256 (same tokenizer as GPT-2/3&dagger;) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | <p><strong>&dagger;</strong> ByteLevelBPETokenizer was trained on the same Slovak corpus.</p></figure> ## Training data Slovak GPT-J models were trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model. The dataset was preprocessed and cleaned in a specific way that involves minor but a few caveats, so in order to achieve the expected performance, feel free to refer to [How to use] section. Please, keep in mind that despite the effort to remove inappropriate corpus, the model still might generate sensitive content or leak sensitive information. ## Training procedure This model was trained for a bit more than 26.5 billion tokens over 48,001 steps on TPU v3-8 pod. The cross-entropy validation loss at the last step was `2.657`. ## Intended Use Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt. ### How to use This model along with the tokenizer can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-1.4B") model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-1.4B") ``` When generating a prompt keep in mind these three things, and you should be good to go: 1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after `slovenčinu`) and "Mám rád slovenčinu " (trailing space after `slovenčinu`), i.e `[12805, 2872, 46878]` != `[12805, 2872, 46878, 221]`. 2. Always use good ol' US English primary double quotation marks, i.e. `""` instead of `„“`. 3. 
In case of a new line always enter `\n\n` instead of a single `\n` To illustrate an example of a basic text generation: ``` >>> prompt = "Tradičné jedlo na Orave sú" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input) >>> tokenizer.decode(output[0]) 'Tradičné jedlo na Orave sú bryndzové halušky\n\nNa Orave sa v minulosti varilo viac druhov' ``` ### Capabilities, Limitations, and Biases Based on the parameter count and our internal testing this should be by far the most capable model out of all models released in Slovak GPT-J series. For sake of simplicity, I have omitted all the boilerplate code and swapped `\n` for new lines in the examples that follow. These were generated with both `top_p` and `temperature` set to `0.9`. A spin on "Mor ho!" by Samo Chalpuka ``` PROMPT Zleteli orly z Tatry, tiahnu na podolia, ponad vysoké hory, ponad rovné polia; preleteli cez Dunaj, cez tú šíru vodu, sadli tam za pomedzím slovenského rodu. Duní Dunaj a luna za lunou sa valí OUTPUT Zleteli orly z Tatry, tiahnu na podolia, ponad vysoké hory, ponad rovné polia; preleteli cez Dunaj, cez tú šíru vodu, sadli tam za pomedzím slovenského rodu. Duní Dunaj a luna za lunou sa valí: nad ním svieti pevný hrad na vysokom bralí. Pod tým hradom je zem slovenská, na nej je Slovensko naše. Na tom vrchu leží slovenský národ, pod tým bralom leží krajina slovenská. A pod tým hradom leží i jeho sláva: na tom vrchu, na tom brale leží sláva slovenského národa.``` ``` Jokes are a bit hit or miss. Very often they contain vulgarisms and/or are highly politically incorrect. This one is rather exception than the rule. ``` PROMPT Nasledujú najlepšie vtipy. Prečo OUTPUT Nasledujú najlepšie vtipy. Prečo má policajt pri sebe vždy dva nože? Jeden je na sebaobranu, druhý na boj zblízka. ``` The infamous unicorn prompt from GPT-2 paper. ``` PROMPT V šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky. OUTPUT V šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky. V Andách ich totiž žilo približne 700 jedincov, ktoré sa živili výhradne materským mliekom. Slováci sú tak pravdepodobne prvými Európanmi, ktorí v Andách stretli jednorožca. "Je to dôkaz, že tieto zvieratá sú inteligentné a že žijú v Andách už stovky rokov," povedal pre webový portál televízie JOJ profesor geológie, geografie a zoológie, Milan Kováč. Podľa profesora Kováča si v Andách zvieratá vytvárajú svoj vlastný jazyk. Je to zároveň dôkaz, že jednorožce žili v minulosti aj v slovenských pohoriach. "Jednorožce sa tam síce vyskytovali, ale neboli tak dobre preskúmané, ako teraz v Andách." Na Slovensku však ľudia o jednorožcoch donedávna vedeli veľmi málo.<|endoftext|> ``` Since the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech: ``` >>> prompt = "Věta nesmí být sprostá a musí být zcela" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input, max_length=16) >>> tokenizer.decode(output[0]) 'Věta nesmí být sprostá a musí být zcela pravdivá.' 
``` ## Citation and Related Information This was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now :) If you use this model or have any questions about it feel free to hit me up at [twitter](https://twitter.com/miloskondela) or check out my [github](https://github.com/kondela) profile. ### BibTeX entry To cite this model: ```bibtex @misc{slovak-gpt-j-1.4B, author = {Kondela, Milos}, title = {{Slovak GPT-J-1.4B}}, howpublished = {\url{https://huggingface.co/Milos/slovak-gpt-j-1.4B}}, year = 2022, month = February } ``` To cite the codebase that trained this model: ```bibtex @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` ## Acknowledgements This project was generously supported by [TPU Research Cloud (TRC) program](https://sites.research.google/trc/about/). Shoutout also goes to [Ben Wang](https://github.com/kingoflolz) and great [EleutherAI community](https://www.eleuther.ai/).
Milos/slovak-gpt-j-162M
Milos
gptj
6
61
transformers
0
text-generation
true
false
false
gpl-3.0
['sk']
null
null
0
0
0
0
0
0
0
['Slovak GPT-J', 'pytorch', 'causal-lm']
false
true
true
6,921
# Slovak GPT-J-162M Slovak GPT-J-162M is the first model released in Slovak GPT-J series and the very first publicly available transformer trained predominantly on Slovak corpus. Since the initial release two other models were made public, [Slovak GPT-J-405M](https://huggingface.co/Milos/slovak-gpt-j-405M) and the largest [Slovak GPT-J-1.4B](https://huggingface.co/Milos/slovak-gpt-j-1.4B). ## Model Description Model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) and has over 162M trainable parameters. <figure> | Hyperparameter | Value | |----------------------|-------------------------------------------------------------------------------------------------------------------------------| | \\(n_{parameters}\\) | 162,454,608 | | \\(n_{layers}\\) | 12 | | \\(d_{model}\\) | 768 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50256 (same tokenizer as GPT-2/3&dagger;) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | <p><strong>&dagger;</strong> ByteLevelBPETokenizer was trained on the same Slovak corpus.</p></figure> ## Training data Slovak GPT-J-162M was trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model. The dataset was preprocessed and cleaned in a specific way that involves minor but a few caveats, so in order to achieve the expected performance, feel free to refer to [How to use] section. Please, keep in mind that despite the effort to remove inappropriate parts of the corpus, the model still might generate sensitive content or leak sensitive information. ## Training procedure This model was trained for almost 37 billion tokens over 69,001 steps on TPU v3-8 pod. The cross-entropy validation loss at the last step was 3.065. ## Intended Use Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt. ### How to use This model along with the tokenizer can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-162M") model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-162M") ``` When generating a prompt keep in mind these three things, and you should be good to go: 1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after `slovenčinu`) and "Mám rád slovenčinu " (trailing space after `slovenčinu`), i.e `[12805, 2872, 46878]` != `[12805, 2872, 46878, 221]`. 2. Always use good ol' US English primary double quotation marks, i.e. `""` instead of `„“`. 3. 
In case of a new line always enter `\n\n` instead of a single `\n` To illustrate an example of a basic text generation: ``` >>> prompt = "Moje najobľubenejšie mesto na severe Slovenska je" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input) >>> tokenizer.decode(output[0]) 'Moje najobľubenejšie mesto na severe Slovenska je Žilina.\n\nV Žiline sa nachádza množstvo zaujímavých miest' ``` ### Capabilities, Limitations, and Biases First and foremost, the capability of this particular model is very limited due to its relatively small size totalling only 162M parameters, hence the intended use of this particular model is to educate and have fun! :) Since the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech: ``` >>> prompt = "Věta nesmí být sprostá a musí být zcela" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input, max_length=16) >>> tokenizer.decode(output[0]) 'Věta nesmí být sprostá a musí být zcela věrná.' ``` ## Citation and Related Information This was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now. Based on the popularity and interest in this model I might release _substantially_ larger versions of Slovak GPT-J models that are way more capable. If you use this model or have any questions about it feel free to hit me up at [twitter](https://twitter.com/miloskondela) or check out my [github](https://github.com/kondela) profile. ### BibTeX entry To cite this model: ```bibtex @misc{slovak-gpt-j-162m, author = {Kondela, Milos}, title = {{Slovak GPT-J-162M}}, howpublished = {\url{https://huggingface.co/Milos/slovak-gpt-j-162M}}, year = 2022, month = February } ``` To cite the codebase that trained this model: ```bibtex @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` ## Acknowledgements This project was generously supported by [TPU Research Cloud (TRC) program](https://sites.research.google/trc/about/). Shoutout also goes to [Ben Wang](https://github.com/kingoflolz) and great [EleutherAI community](https://www.eleuther.ai/).
Milos/slovak-gpt-j-405M
Milos
gptj
6
6
transformers
0
text-generation
true
false
false
gpl-3.0
['sk']
null
null
0
0
0
0
0
0
0
['Slovak GPT-J', 'pytorch', 'causal-lm']
false
true
true
8,206
# Slovak GPT-J-405M Slovak GPT-J-405M is the second model released in Slovak GPT-J series after its smaller variant [Slovak GPT-J-162M](https://huggingface.co/Milos/slovak-gpt-j-162M). Since then a larger [Slovak GPT-J-1.4B](https://huggingface.co/Milos/slovak-gpt-j-1.4B) was released. ## Model Description Model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) and has over 405M trainable parameters. <figure> | Hyperparameter | Value | |----------------------|----------------------------------------------------------------------------------------------------------------------------------------| | \\(n_{parameters}\\) | 405,677,136 | | \\(n_{layers}\\) | 24 | | \\(d_{model}\\) | 1024 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50256 (same tokenizer as GPT-2/3&dagger;) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | <p><strong>&dagger;</strong> ByteLevelBPETokenizer was trained on the same Slovak corpus.</p></figure> ## Training data Slovak GPT-J models were trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model. The dataset was preprocessed and cleaned in a specific way that involves minor but a few caveats, so in order to achieve the expected performance, feel free to refer to [How to use] section. Please, keep in mind that despite the effort to remove inappropriate corpus, the model still might generate sensitive content or leak sensitive information. ## Training procedure This model was trained for a bit more than 36.5 billion tokens over 69,001 steps on TPU v3-8 pod. The cross-entropy validation loss at the last step was `2.821`. ## Intended Use Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt. ### How to use This model along with the tokenizer can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-405M") model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-405M") ``` When generating a prompt keep in mind these three things, and you should be good to go: 1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after `slovenčinu`) and "Mám rád slovenčinu " (trailing space after `slovenčinu`), i.e `[12805, 2872, 46878]` != `[12805, 2872, 46878, 221]`. 2. Always use good ol' US English primary double quotation marks, i.e. `""` instead of `„“`. 3. 
In case of a new line always enter `\n\n` instead of a single `\n` To illustrate an example of a basic text generation: ``` >>> prompt = "Tradičné jedlo na Orave sú" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input) >>> tokenizer.decode(output[0]) 'Tradičné jedlo na Orave sú bryndzové halušky\n\nNa Orave sa v minulosti varilo viac druhov' ``` ### Capabilities, Limitations, and Biases The capability of this particular model is somewhat decent despite its small size totalling 405M parameters. With relative ease it can manage to generate interesting and grammatically correct content. For example, you can try few of the following prompts. (For sake of simplicity, I have omitted all the boilerplate code and swapped `\n` for new lines). Try generating "How to" articles: ``` PROMPT Ako napísať pôsobivú esej krok za krokom OUTPUT Ako napísať pôsobivú esej krok za krokom V tomto článku sa dozviete, ako napísať esej, ktorá bude mať v sebe niečo, čo zaujme a bude sa vám páčiť. V prvom rade si musíte uvedomiť, že esej je vlastne písaný text, ktorý má byť napísaný na počítači.' ``` However, relying on the model to produce factually correct information isn't recommended. Or try to generate chat conversations: ``` PROMPT Peter: Ako sa mas? Lucia: Ale celkom dobre. Co si robil? Peter: OUTPUT Peter: Ako sa mas? Lucia: Ale celkom dobre. Co si robil? Peter: No, bol som na chate. Lucia: A co si tam robil? Peter: No, bol som tam s kamošmi. Lucia: A co si tam robil? Peter: No, bol som tam s kamošmi. ``` Apparently either Peters are more likely to act suspiciously in this particular context or there's a problem with the model. Let's assume that the second explanation may hold some merit. In general, GPT models can (and often will) get into a repeating cycle of generating same content. This is a common problem beyond the scope of this README, however, see [generate's documentation](https://huggingface.co/docs/transformers/master/en/main_classes/model#transformers.generation_utils.GenerationMixin.generate) on how to introduce a frequency/repetition penalty. Since the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech: ``` >>> prompt = "Věta nesmí být sprostá a musí být zcela" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input, max_length=16) >>> tokenizer.decode(output[0]) 'Věta nesmí být sprostá a musí být zcela pravdivá.' ``` ## Citation and Related Information This was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now :) If you use this model or have any questions about it feel free to hit me up at [twitter](https://twitter.com/miloskondela) or check out my [github](https://github.com/kondela) profile. 
### BibTeX entry To cite this model: ```bibtex @misc{slovak-gpt-j-405m, author = {Kondela, Milos}, title = {{Slovak GPT-J-405M}}, howpublished = {\url{https://huggingface.co/Milos/slovak-gpt-j-405M}}, year = 2022, month = February } ``` To cite the codebase that trained this model: ```bibtex @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` ## Acknowledgements This project was generously supported by [TPU Research Cloud (TRC) program](https://sites.research.google/trc/about/). Shoutout also goes to [Ben Wang](https://github.com/kingoflolz) and great [EleutherAI community](https://www.eleuther.ai/).
MingZhong/DialogLED-base-16384
MingZhong
led
22
9,638
transformers
3
text2text-generation
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
609
Paper: [DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization](https://arxiv.org/abs/2109.02492). ## Introduction DialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and is further trained on a large amount of long dialogue data with window-based denoising as the pre-training task. This is the base version of DialogLED; the input length was limited to 16,384 tokens in the pre-training phase. ## Finetuning for Downstream Tasks Please refer to [our GitHub page](https://github.com/microsoft/DialogLM).
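A minimal loading sketch (not from the original card). Since DialogLED follows the LED architecture, the standard seq2seq auto classes are assumed to work; note that this checkpoint is a pre-trained denoiser meant to be fine-tuned first, so the generation call below is only illustrative, and the exact dialogue formatting is documented in the GitHub repo.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed to load via the generic seq2seq classes because DialogLED is LED-based.
tokenizer = AutoTokenizer.from_pretrained("MingZhong/DialogLED-base-16384")
model = AutoModelForSeq2SeqLM.from_pretrained("MingZhong/DialogLED-base-16384")

# Illustrative input only; see the project repo for the preprocessing used in fine-tuning.
dialogue = "A: How did the quarterly review go? B: Long, but we agreed on next year's roadmap."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True, max_length=16384)
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```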
MingZhong/DialogLED-large-5120
MingZhong
led
22
1,475
transformers
4
text2text-generation
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
609
Paper: [DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization](https://arxiv.org/abs/2109.02492). ## Introduction DialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and is further trained on a large amount of long dialogue data with window-based denoising as the pre-training task. This is the large version of DialogLED; the input length was limited to 5,120 tokens in the pre-training phase. ## Finetuning for Downstream Tasks Please refer to [our GitHub page](https://github.com/microsoft/DialogLM).
Mingyi/classify_title_subject
Mingyi
bert
8
3
transformers
1
text-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
4,464
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tmp6tsjsfbf This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0178 - Train Sparse Categorical Accuracy: 0.9962 - Epoch: 49 ## Model description This model classifies the title of a content (e.g., YouTube video, article, or podcast episode) into 1 of 8 subjects 0. art 1. personal development 2. world 3. health 4. science 5. business 6. humanities 7. technology. This model is used to support [Sanderling](https://sanderling.app) ## Intended uses & limitations More information needed ## Training and evaluation data We used 1.5k labeled titles to train the model. Majority of the training dataset are English titles. The rest are Chinese titles. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:-----:| | 1.8005 | 0.3956 | 0 | | 1.3302 | 0.5916 | 1 | | 0.8998 | 0.7575 | 2 | | 0.6268 | 0.8468 | 3 | | 0.4239 | 0.9062 | 4 | | 0.2982 | 0.9414 | 5 | | 0.2245 | 0.9625 | 6 | | 0.1678 | 0.9730 | 7 | | 0.1399 | 0.9745 | 8 | | 0.1059 | 0.9827 | 9 | | 0.0822 | 0.9850 | 10 | | 0.0601 | 0.9902 | 11 | | 0.0481 | 0.9932 | 12 | | 0.0386 | 0.9955 | 13 | | 0.0292 | 0.9977 | 14 | | 0.0353 | 0.9940 | 15 | | 0.0336 | 0.9932 | 16 | | 0.0345 | 0.9910 | 17 | | 0.0179 | 0.9985 | 18 | | 0.0150 | 0.9985 | 19 | | 0.0365 | 0.9895 | 20 | | 0.0431 | 0.9895 | 21 | | 0.0243 | 0.9955 | 22 | | 0.0317 | 0.9925 | 23 | | 0.0375 | 0.9902 | 24 | | 0.0138 | 0.9970 | 25 | | 0.0159 | 0.9977 | 26 | | 0.0160 | 0.9962 | 27 | | 0.0151 | 0.9977 | 28 | | 0.0337 | 0.9902 | 29 | | 0.0119 | 0.9977 | 30 | | 0.0165 | 0.9955 | 31 | | 0.0133 | 0.9977 | 32 | | 0.0047 | 1.0 | 33 | | 0.0037 | 1.0 | 34 | | 0.0033 | 1.0 | 35 | | 0.0031 | 1.0 | 36 | | 0.0036 | 1.0 | 37 | | 0.0343 | 0.9887 | 38 | | 0.0234 | 0.9962 | 39 | | 0.0034 | 1.0 | 40 | | 0.0036 | 1.0 | 41 | | 0.0261 | 0.9917 | 42 | | 0.0111 | 0.9970 | 43 | | 0.0039 | 1.0 | 44 | | 0.0214 | 0.9932 | 45 | | 0.0044 | 0.9985 | 46 | | 0.0122 | 0.9985 | 47 | | 0.0119 | 0.9962 | 48 | | 0.0178 | 0.9962 | 49 | ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Tokenizers 0.10.3
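A rough inference sketch (not part of the original card), assuming the checkpoint loads with the standard TensorFlow sequence-classification class and that the output indices follow the 0–7 subject order listed above.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Assumed label order, copied from the subject list in the card above.
SUBJECTS = ["art", "personal development", "world", "health",
            "science", "business", "humanities", "technology"]

tokenizer = AutoTokenizer.from_pretrained("Mingyi/classify_title_subject")
model = TFAutoModelForSequenceClassification.from_pretrained("Mingyi/classify_title_subject")

title = "How mRNA vaccines train your immune system"
inputs = tokenizer(title, return_tensors="tf", truncation=True)
logits = model(**inputs).logits
predicted_index = int(tf.argmax(logits, axis=-1)[0])
print(SUBJECTS[predicted_index])  # prints one of the eight subjects listed above
```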
Minowa/distilbert-base-uncased-finetuned-ner
Minowa
distilbert
13
5
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,556
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0596 - Precision: 0.9240 - Recall: 0.9378 - F1: 0.9308 - Accuracy: 0.9838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2381 | 1.0 | 878 | 0.0707 | 0.9100 | 0.9240 | 0.9170 | 0.9805 | | 0.0563 | 2.0 | 1756 | 0.0583 | 0.9246 | 0.9382 | 0.9314 | 0.9835 | | 0.03 | 3.0 | 2634 | 0.0596 | 0.9240 | 0.9378 | 0.9308 | 0.9838 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
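For quick inference (not part of the original card), a hedged example using the token-classification pipeline; the aggregation setting is an illustrative choice.

```python
from transformers import pipeline

# Hedged sketch: run the fine-tuned checkpoint as a standard NER pipeline.
ner = pipeline(
    "token-classification",
    model="Minowa/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Hugging Face is based in New York City."))
```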
Mirelle/t5-small-finetuned-ro-to-en
Mirelle
t5
12
3
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['wmt16']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,570
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-small-finetuned-ro-to-en

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5877
- Bleu: 13.4499
- Gen Len: 17.5073

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.6167 | 0.05 | 2000 | 1.8649 | 9.7029 | 17.5753 |
| 1.4551 | 0.1 | 4000 | 1.7810 | 10.6382 | 17.5358 |
| 1.3723 | 0.16 | 6000 | 1.7369 | 11.1285 | 17.5158 |
| 1.3373 | 0.21 | 8000 | 1.7086 | 11.6173 | 17.5013 |
| 1.2935 | 0.26 | 10000 | 1.6890 | 12.0641 | 17.5038 |
| 1.2632 | 0.31 | 12000 | 1.6670 | 12.3012 | 17.5253 |
| 1.2463 | 0.37 | 14000 | 1.6556 | 12.3991 | 17.5153 |
| 1.2272 | 0.42 | 16000 | 1.6442 | 12.7392 | 17.4732 |
| 1.2052 | 0.47 | 18000 | 1.6328 | 12.8446 | 17.5143 |
| 1.1985 | 0.52 | 20000 | 1.6233 | 13.0892 | 17.4807 |
| 1.1821 | 0.58 | 22000 | 1.6153 | 13.1529 | 17.4952 |
| 1.1791 | 0.63 | 24000 | 1.6079 | 13.2964 | 17.5088 |
| 1.1698 | 0.68 | 26000 | 1.6038 | 13.3548 | 17.4842 |
| 1.154 | 0.73 | 28000 | 1.5957 | 13.3012 | 17.5053 |
| 1.1634 | 0.79 | 30000 | 1.5931 | 13.4203 | 17.5083 |
| 1.1487 | 0.84 | 32000 | 1.5893 | 13.3959 | 17.5123 |
| 1.1495 | 0.89 | 34000 | 1.5875 | 13.3745 | 17.4902 |
| 1.1458 | 0.94 | 36000 | 1.5877 | 13.4129 | 17.5043 |
| 1.1465 | 1.0 | 38000 | 1.5877 | 13.4499 | 17.5073 |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
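For illustration, a minimal translation sketch is shown below. Whether this fine-tune still expects T5's task prefix is an assumption not stated in the card, so treat the prefix as optional.

```python
# Illustrative Romanian-to-English sketch (not from the authors). The task
# prefix "translate Romanian to English:" is assumed from T5 conventions;
# try the input with and without it if outputs look wrong.
from transformers import pipeline

translator = pipeline("text2text-generation", model="Mirelle/t5-small-finetuned-ro-to-en")
result = translator("translate Romanian to English: Vremea este frumoasă astăzi.")
print(result[0]["generated_text"])
```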
Mirjam/test-finetuned
Mirjam
t5
25
3
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,290
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# test-finetuned

This model is a fine-tuned version of [yhavinga/t5-v1.1-base-dutch-cnn-test](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cnn-test) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 1 | nan | 33.8462 | 31.746 | 30.7692 | 30.7692 | 86.0 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.15.1
- Tokenizers 0.10.3
MisbaHF/distilbert-base-uncased-finetuned-cola
MisbaHF
distilbert
13
3
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,572
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7134
- Matthews Correlation: 0.5411

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5294 | 1.0 | 535 | 0.5082 | 0.4183 |
| 0.3483 | 2.0 | 1070 | 0.4969 | 0.5259 |
| 0.2355 | 3.0 | 1605 | 0.6260 | 0.5065 |
| 0.1733 | 4.0 | 2140 | 0.7134 | 0.5411 |
| 0.1238 | 5.0 | 2675 | 0.8516 | 0.5291 |

### Framework versions

- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
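A minimal acceptability-scoring sketch is shown below for illustration; it is not from the model authors, and the human-readable label names depend on the id2label mapping in the config, which the card does not document.

```python
# Illustrative CoLA-style acceptability check. ASSUMPTION: the checkpoint keeps
# GLUE CoLA's binary scheme (often LABEL_0 = unacceptable, LABEL_1 = acceptable).
from transformers import pipeline

cola = pipeline("text-classification", model="MisbaHF/distilbert-base-uncased-finetuned-cola")
print(cola("The book what I read was good."))   # likely the "unacceptable" class
print(cola("The book that I read was good."))  # likely the "acceptable" class
```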
MistahCase/distilroberta-base-testingSB-testingSB
MistahCase
roberta
17
4
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,309
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilroberta-base-testingSB-testingSB

This model is a fine-tuned version of [MistahCase/distilroberta-base-testingSB](https://huggingface.co/MistahCase/distilroberta-base-testingSB) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9870

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1463 | 1.0 | 1461 | 1.1171 |
| 1.0188 | 2.0 | 2922 | 1.0221 |
| 1.0016 | 3.0 | 4383 | 0.9870 |

### Framework versions

- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
MistahCase/distilroberta-base-testingSB
MistahCase
roberta
13
4
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,470
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilroberta-base-testingSB

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on a company-specific, Danish dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0403

## Model description

Customer-specific model used to embed asset management work orders in Danish.

## Intended uses & limitations

Customer-specific and trained for unsupervised categorization tasks.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9885 | 1.0 | 1461 | 1.0564 |
| 0.9963 | 2.0 | 2922 | 1.0278 |
| 0.9903 | 3.0 | 4383 | 1.0403 |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
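Since the card describes a masked-language model for Danish work orders but shows no usage, a minimal fill-mask sketch (illustrative, not from the authors) is:

```python
# Illustrative fill-mask sketch; RoBERTa-style models use the <mask> token.
# The Danish example sentence is arbitrary.
from transformers import pipeline

fill = pipeline("fill-mask", model="MistahCase/distilroberta-base-testingSB")
for pred in fill("Pumpen skal <mask> inden næste inspektion."):
    print(pred["token_str"], round(pred["score"], 3))
```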
Modfiededition/t5-base-fine-tuned-on-jfleg
Modfiededition
t5
8
18
transformers
7
text2text-generation
false
true
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,393
## t5-base-fine-tuned-on-jfleg

T5-base model fine-tuned on the [**JFLEG dataset**](https://huggingface.co/datasets/jfleg) with the objective of **text2text-generation**.

# Model Description:

T5 is an encoder-decoder model pre-trained with a multi-task mixture of unsupervised and supervised tasks, for which each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g., for translation: translate English to German: …, for summarization: summarize: ….

The T5 model was presented in [**Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer**](https://arxiv.org/pdf/1910.10683.pdf) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.

## Pre-Processing:

For this task of grammar correction, we prepend the prefix "grammar: " to each of the input sentences.

```
grammar: Your sentence
```

## How to use:

You can use this model directly with the pipeline for detecting and correcting grammatical mistakes.

```
from transformers import pipeline

model_checkpoint = "Modfiededition/t5-base-fine-tuned-on-jfleg"
model = pipeline("text2text-generation", model=model_checkpoint)
text = "grammar: I am write on AI"  # input prefixed as described above
output = model(text)
```

Result(s)

```
I am writing on AI.
```
Mofe/speech-sprint-test
Mofe
wav2vec2
18
8
transformers
0
automatic-speech-recognition
true
false
false
null
['ab']
['common_voice']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
true
true
true
1,082
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# speech-sprint-test

This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 207.6065
- Wer: 1.5484

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
Mofe/xls-r-hausa-40
Mofe
wav2vec2
24
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ha']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
true
true
true
1,853
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xls-r-hausa-40

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4998
- Wer: 0.5153

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 9.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 80.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0021 | 8.33 | 500 | 2.9059 | 1.0 |
| 2.6604 | 16.66 | 1000 | 2.6402 | 0.9892 |
| 1.2216 | 24.99 | 1500 | 0.6051 | 0.6851 |
| 1.0754 | 33.33 | 2000 | 0.5408 | 0.6464 |
| 0.9582 | 41.66 | 2500 | 0.5521 | 0.5935 |
| 0.8653 | 49.99 | 3000 | 0.5156 | 0.5550 |
| 0.7867 | 58.33 | 3500 | 0.5439 | 0.5606 |
| 0.7265 | 66.66 | 4000 | 0.4863 | 0.5255 |
| 0.6699 | 74.99 | 4500 | 0.5050 | 0.5169 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
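For illustration, a minimal transcription sketch (not from the authors) using the ASR pipeline:

```python
# Illustrative Hausa transcription sketch; the audio path is a placeholder
# and the recording is expected to be 16 kHz mono.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Mofe/xls-r-hausa-40")
print(asr("hausa_sample.wav")["text"])  # placeholder path
```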
MohaAM/en_pipeline
MohaAM
null
22
6
spacy
0
token-classification
false
false
false
null
['en']
null
null
0
0
0
0
0
0
0
['spacy', 'token-classification']
false
true
true
1,876
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.1.0,<3.2.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `ner`, `attribute_ruler`, `lemmatizer` |
| **Components** | `tok2vec`, `tagger`, `parser`, `ner`, `attribute_ruler`, `lemmatizer` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |

### Label Scheme

<details>
<summary>View label scheme (114 labels for 3 components)</summary>

| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
| **`ner`** | `ARC`, `AST`, `BOOK`, `CAUSAL`, `COMPARISON`, `DATE`, `HEM`, `HOUR`, `HYPO`, `INSTRUMENT`, `JUDGEMENT`, `LAWS`, `MODEL`, `NAME`, `Observation`, `PAR`, `PLACE`, `QUANTITY`, `REASON`, `ZOD` |

</details>

### Accuracy

| Type | Score |
| --- | --- |
| `TAG_ACC` | 0.00 |
| `DEP_UAS` | 0.00 |
| `DEP_LAS` | 0.00 |
| `DEP_LAS_PER_TYPE` | 0.00 |
| `SENTS_P` | 100.00 |
| `SENTS_R` | 100.00 |
| `SENTS_F` | 100.00 |
| `ENTS_F` | 99.32 |
| `ENTS_P` | 99.47 |
| `ENTS_R` | 99.17 |
| `LEMMA_ACC` | 0.00 |
| `NER_LOSS` | 7790.09 |
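The card lists the pipeline components but no usage; a minimal sketch (assuming the packaged wheel from this repo has been installed so that `spacy.load("en_pipeline")` resolves) is:

```python
# Illustrative spaCy usage sketch; the sample sentence is arbitrary and the
# entity labels come from the custom NER scheme listed above.
import spacy

nlp = spacy.load("en_pipeline")
doc = nlp("The observation was recorded at the third hour near the eastern arc.")
print([(ent.text, ent.label_) for ent in doc.ents])
```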
MohammadABH/bertweet-finetuned-rbam
MohammadABH
roberta
11
4
transformers
0
text-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,433
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bertweet-finetuned-rbam

This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3971
- F1: 0.6620

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7138 | 1.0 | 1632 | 0.7529 | 0.6814 |
| 0.5692 | 2.0 | 3264 | 0.8473 | 0.6803 |
| 0.4126 | 3.0 | 4896 | 1.0029 | 0.6617 |
| 0.2854 | 4.0 | 6528 | 1.2167 | 0.6635 |
| 0.2007 | 5.0 | 8160 | 1.3971 | 0.6620 |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
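A minimal inference sketch (illustrative, not from the authors):

```python
# Illustrative text-classification sketch; the label names returned depend on
# the id2label mapping in the model config, which the card does not document.
from transformers import pipeline

clf = pipeline("text-classification", model="MohammadABH/bertweet-finetuned-rbam")
print(clf("@user I completely disagree with this take"))
```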
MohammadABH/twitter-roberta-base-dec2021_rbam_fine_tuned
MohammadABH
roberta
11
3
transformers
0
text-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,495
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# twitter-roberta-base-dec2021_rbam_fine_tuned

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-dec2021](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8295
- Accuracy: 0.6777
- Precision: 0.6743
- Recall: 0.6777
- F1: 0.6753

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8455 | 1.0 | 3264 | 0.7663 | 0.6661 | 0.6802 | 0.6661 | 0.6693 |
| 0.6421 | 2.0 | 6528 | 0.8295 | 0.6777 | 0.6743 | 0.6777 | 0.6753 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
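For illustration, a sketch that returns scores for every class rather than only the top prediction (not from the authors; label names come from the undocumented model config):

```python
# Illustrative all-scores classification sketch for this checkpoint.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="MohammadABH/twitter-roberta-base-dec2021_rbam_fine_tuned",
    top_k=None,  # return the score for every class
)
print(clf("@user I completely agree, great point!"))
```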
Monsia/autonlp-tweets-classification-23044997
Monsia
distilbert
9
3
transformers
0
text-classification
true
false
false
null
['en']
['Monsia/autonlp-data-tweets-classification']
4.819872182577655
0
0
0
0
0
0
0
['autonlp']
false
true
true
1,250
# Model Trained Using AutoNLP

- Problem type: Multi-class Classification
- Model ID: 23044997
- CO2 Emissions (in grams): 4.819872182577655

## Validation Metrics

- Loss: 0.001594889909029007
- Accuracy: 0.9997478885667465
- Macro F1: 0.9991190902836993
- Micro F1: 0.9997478885667465
- Weighted F1: 0.9997476735518704
- Macro Precision: 0.9998014460161265
- Micro Precision: 0.9997478885667465
- Weighted Precision: 0.9997479944069787
- Macro Recall: 0.9984426545713851
- Micro Recall: 0.9997478885667465
- Weighted Recall: 0.9997478885667465

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Monsia/autonlp-tweets-classification-23044997
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("Monsia/autonlp-tweets-classification-23044997", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Monsia/autonlp-tweets-classification-23044997", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
Monsia/camembert-fr-covid-tweet-classification
Monsia
camembert
7
5
transformers
0
text-classification
true
false
false
apache-2.0
['fr']
null
null
0
0
0
0
0
0
0
['classification']
false
true
true
1,326
# camembert-fr-covid-tweet-classification

This model is a fine-tuned checkpoint of [Yanzhu/bertweetfr-base](https://huggingface.co/Yanzhu/bertweetfr-base), trained to classify French COVID-19 tweets by topic.
This model reaches an accuracy of 66.00% on the dev set.
In this dataset, given a tweet, the goal was to infer the underlying topic of the tweet by choosing from five topic classes:
- chiffres : the tweet talks about COVID-19 statistics.
- mesures : the tweet talks about measures taken by the government against COVID-19.
- opinions : the tweet talks about people's opinions, such as fake news.
- symptomes : the tweet talks about symptoms or variants of COVID-19.
- divers : anything else.

# Pipelining the Model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("Monsia/camembert-fr-covid-tweet-classification")
model = AutoModelForSequenceClassification.from_pretrained("Monsia/camembert-fr-covid-tweet-classification")
nlp_topic_classif = pipeline("text-classification", model=model, tokenizer=tokenizer)
nlp_topic_classif("tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les '' ont dit ''...")
# Output: [{'label': 'opinions', 'score': 0.831}]
```
Monsia/camembert-fr-covid-tweet-sentiment-classification
Monsia
camembert
7
176
transformers
0
text-classification
true
false
false
apache-2.0
['fr']
null
null
0
0
0
0
0
0
0
['classification']
false
true
true
1,062
# camembert-fr-covid-tweet-sentiment-classification

This model is a fine-tuned checkpoint of [Yanzhu/bertweetfr-base](https://huggingface.co/Yanzhu/bertweetfr-base), trained to classify the sentiment of French COVID-19 tweets.
This model reaches an accuracy of 71% on the dev set.
In this dataset, given a tweet, the goal was to infer the underlying sentiment of the tweet by choosing from three classes:
- 0 : negatif
- 1 : neutre
- 2 : positif

# Pipelining the Model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("Monsia/camembert-fr-covid-tweet-sentiment-classification")
model = AutoModelForSequenceClassification.from_pretrained("Monsia/camembert-fr-covid-tweet-sentiment-classification")
nlp_sentiment_classif = pipeline("text-classification", model=model, tokenizer=tokenizer)
nlp_sentiment_classif("tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les '' ont dit ''...")
# Output: [{'label': ..., 'score': ...}], where the label is one of: negatif, neutre, positif
```
Monsia/test-model-lg-data
Monsia
wav2vec2
21
10
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,703
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# test-model-lg-data

This model is a fine-tuned version of [Monsia/test-model-lg-data](https://huggingface.co/Monsia/test-model-lg-data) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3354
- Wer: 0.4150

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0236 | 0.67 | 100 | 0.4048 | 0.4222 |
| 0.0304 | 1.35 | 200 | 0.4266 | 0.4809 |
| 0.0545 | 2.03 | 300 | 0.4309 | 0.4735 |
| 0.0415 | 2.7 | 400 | 0.4269 | 0.4595 |
| 0.033 | 3.38 | 500 | 0.4085 | 0.4537 |
| 0.0328 | 4.05 | 600 | 0.3642 | 0.4224 |
| 0.0414 | 4.73 | 700 | 0.3354 | 0.4150 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
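A lower-level inference sketch using the processor and model directly (illustrative, not from the authors):

```python
# Illustrative CTC decoding sketch; the audio path is a placeholder and the
# recording must be 16 kHz mono for wav2vec2-style models.
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("Monsia/test-model-lg-data")
model = Wav2Vec2ForCTC.from_pretrained("Monsia/test-model-lg-data")

speech, sample_rate = sf.read("sample.wav")  # placeholder path
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
print(processor.batch_decode(pred_ids)[0])
```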
Mood/distilbert-base-uncased-finetuned-ner
Mood
distilbert
15
5
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
950
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
MoritzLaurer
deberta-v2
8
31,156
transformers
15
zero-shot-classification
true
false
false
mit
['en']
['multi_nli', 'anli', 'fever']
null
8
2
6
0
0
0
0
['text-classification', 'zero-shot-classification']
true
true
true
6,834
# DeBERTa-v3-base-mnli-fever-anli

## Model description

This model was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs. This base model outperforms almost all large models on the [ANLI benchmark](https://github.com/facebookresearch/anli).
The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf).
For highest performance (but less speed), I recommend using https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli.

### How to use the model

#### Simple zero-shot classification pipeline

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli")
sequence_to_classify = "Angela Merkel is a politician in Germany and leader of the CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```

#### NLI use-case

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data

DeBERTa-v3-base-mnli-fever-anli was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs.

### Training procedure

DeBERTa-v3-base-mnli-fever-anli was trained using the Hugging Face trainer with the following hyperparameters.

```
training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # number of warmup steps for learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```

### Eval results

The model was evaluated using the test sets for MultiNLI and ANLI and the dev set for Fever-NLI. The metric used is accuracy.

mnli-m | mnli-mm | fever-nli | anli-all | anli-r3
---------|----------|---------|----------|----------
0.903 | 0.903 | 0.777 | 0.579 | 0.495

## Limitations and bias

Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.

## Citation

If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022.
‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.

### Ideas for cooperation or questions?

If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)

### Debugging and issues

Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.

## Model Recycling

[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=0.65&mnli_lp=nan&20_newsgroup=-0.61&ag_news=-0.01&amazon_reviews_multi=0.46&anli=0.84&boolq=2.12&cb=16.07&cola=-0.76&copa=8.60&dbpedia=-0.40&esnli=-0.29&financial_phrasebank=-1.98&imdb=-0.47&isear=-0.22&mnli=-0.21&mrpc=0.50&multirc=1.91&poem_sentiment=1.73&qnli=0.07&qqp=-0.37&rotten_tomatoes=-0.74&rte=3.94&sst2=-0.45&sst_5bins=0.07&stsb=1.27&trec_coarse=-0.16&trec_fine=0.18&tweet_ev_emoji=-0.93&tweet_ev_emotion=-1.33&tweet_ev_hate=-1.67&tweet_ev_irony=-5.46&tweet_ev_offensive=-0.17&tweet_ev_sentiment=-0.11&wic=-0.21&wnli=-1.20&wsc=4.18&yahoo_answers=-0.70&model_name=MoritzLaurer%2FDeBERTa-v3-base-mnli-fever-anli&base_name=microsoft%2Fdeberta-v3-base) using MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli as a base model yields average score of 79.69 in comparison to 79.04 by microsoft/deberta-v3-base.

The model is ranked 2nd among all tested models for the microsoft/deberta-v3-base architecture as of 09/01/2023.

Results:

| 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
|---------------:|----------:|-----------------------:|-------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|-------:|--------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|--------:|--------:|----------------:|
| 85.8072 | 90.4333 | 67.32 | 59.625 | 85.107 | 91.0714 | 85.8102 | 67 | 79.0333 | 91.6327 | 82.5 | 94.02 | 71.6428 | 89.5749 | 89.7059 | 64.1708 | 88.4615 | 93.575 | 91.4148 | 89.6811 | 86.2816 | 94.6101 | 57.0588 | 91.5508 | 97.6 | 91.2 | 45.264 | 82.6179 | 54.5455 | 74.3622 | 84.8837 | 71.6949 | 71.0031 | 69.0141 | 68.2692 | 71.3333 |

For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c
MoritzLaurer
deberta-v2
8
194
transformers
5
text-classification
true
false
false
mit
['en']
null
null
0
0
0
0
0
0
0
['text-classification', 'zero-shot-classification']
false
true
true
5,195
# DeBERTa-v3-base-mnli-fever-docnli-ling-2c

## Model description

This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts to learn long range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment" to enable the inclusion of the DocNLI dataset.
The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf) as well as the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).
For highest performance (but less speed), I recommend using https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli.

### How to use the model

#### Simple zero-shot classification pipeline

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c")
sequence_to_classify = "Angela Merkel is a politician in Germany and leader of the CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```

#### NLI use-case

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data

This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).

### Training procedure

DeBERTa-v3-small-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # number of warmup steps for learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```

### Eval results

The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.

mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c | lingnli-2c
---------|----------|---------|----------|----------|------
0.935 | 0.933 | 0.897 | 0.710 | 0.678 | 0.895

## Limitations and bias

Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.

## Citation

If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.

### Ideas for cooperation or questions?

If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)

### Debugging and issues

Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.
MoritzLaurer/DeBERTa-v3-base-mnli
MoritzLaurer
deberta-v2
8
271
transformers
2
zero-shot-classification
true
false
false
null
['en']
null
null
1
0
1
0
0
0
0
['text-classification', 'zero-shot-classification']
false
true
true
5,738
# DeBERTa-v3-base-mnli

## Model description

This model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs.
The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf).
For a more powerful model, check out [DeBERTa-v3-base-mnli-fever-anli](https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli) which was trained on even more data.

## Intended uses & limitations

#### How to use the model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data

This model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs.

### Training procedure

DeBERTa-v3-base-mnli was trained using the Hugging Face trainer with the following hyperparameters.

```
training_args = TrainingArguments(
    num_train_epochs=5,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # number of warmup steps for learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```

### Eval results

The model was evaluated using the matched test set and achieves 0.90 accuracy.

## Limitations and bias

Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.

### BibTeX entry and citation info

If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.

### Ideas for cooperation or questions?

If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)

### Debugging and issues

Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues.
## Model Recycling

[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=0.97&mnli_lp=nan&20_newsgroup=-0.39&ag_news=0.19&amazon_reviews_multi=0.10&anli=1.31&boolq=0.81&cb=8.93&cola=0.01&copa=13.60&dbpedia=-0.23&esnli=-0.51&financial_phrasebank=0.61&imdb=-0.26&isear=-0.35&mnli=-0.34&mrpc=1.24&multirc=1.50&poem_sentiment=-0.19&qnli=0.30&qqp=0.13&rotten_tomatoes=-0.55&rte=3.57&sst2=0.35&sst_5bins=0.39&stsb=1.10&trec_coarse=-0.36&trec_fine=-0.02&tweet_ev_emoji=1.11&tweet_ev_emotion=-0.35&tweet_ev_hate=1.43&tweet_ev_irony=-2.65&tweet_ev_offensive=-1.69&tweet_ev_sentiment=-1.51&wic=0.57&wnli=-2.61&wsc=9.95&yahoo_answers=-0.33&model_name=MoritzLaurer%2FDeBERTa-v3-base-mnli&base_name=microsoft%2Fdeberta-v3-base) using MoritzLaurer/DeBERTa-v3-base-mnli as a base model yields average score of 80.01 in comparison to 79.04 by microsoft/deberta-v3-base.

The model is ranked 1st among all tested models for the microsoft/deberta-v3-base architecture as of 09/01/2023.

Results:

| 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
|---------------:|----------:|-----------------------:|--------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|--------:|--------:|------------------:|--------:|--------:|------------:|-------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|--------:|--------:|----------------:|
| 86.0196 | 90.6333 | 66.96 | 60.0938 | 83.792 | 83.9286 | 86.5772 | 72 | 79.2 | 91.419 | 85.1 | 94.232 | 71.5124 | 89.4426 | 90.4412 | 63.7583 | 86.5385 | 93.8129 | 91.9144 | 89.8687 | 85.9206 | 95.4128 | 57.3756 | 91.377 | 97.4 | 91 | 47.302 | 83.6031 | 57.6431 | 77.1684 | 83.3721 | 70.2947 | 71.7868 | 67.6056 | 74.0385 | 71.7 |

For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
MoritzLaurer/DeBERTa-v3-small-mnli-fever-docnli-ling-2c
MoritzLaurer
deberta-v2
8
6
transformers
0
text-classification
true
false
false
null
['en']
null
null
0
0
0
0
0
0
0
['text-classification', 'zero-shot-classification']
false
true
true
4,332
# DeBERTa-v3-small-mnli-fever-docnli-ling-2c

## Model description

This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts to learn long range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment" to create more training data.
The base model is [DeBERTa-v3-small from Microsoft](https://huggingface.co/microsoft/deberta-v3-small). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf) as well as the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).

## Intended uses & limitations

#### How to use the model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/DeBERTa-v3-small-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]  # this is a binary (2-class) model
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data

This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).

### Training procedure

DeBERTa-v3-small-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.

```
training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # number of warmup steps for learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```

### Eval results

The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c
---------|----------|---------|----------|----------
0.927 | 0.921 | 0.892 | 0.684 | 0.673

## Limitations and bias

Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.

### BibTeX entry and citation info

If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.

### Ideas for cooperation or questions?

If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)

### Debugging and issues

Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues.
MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary
MoritzLaurer
deberta-v2
8
1,998
transformers
2
zero-shot-classification
true
false
false
mit
['en']
['multi_nli', 'anli', 'fever', 'lingnli']
null
0
0
0
0
0
0
0
['text-classification', 'zero-shot-classification']
false
true
true
4,522
# DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary

## Model description

This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).
Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". This is specifically designed for zero-shot classification, where the difference between "neutral" and "contradiction" is irrelevant.
The base model is [DeBERTa-v3-xsmall from Microsoft](https://huggingface.co/microsoft/deberta-v3-xsmall). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).
For highest performance (but less speed), I recommend using https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli.

## Intended uses & limitations

#### How to use the model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data

This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).

### Training procedure

DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary was trained using the Hugging Face trainer with the following hyperparameters.

```
training_args = TrainingArguments(
    num_train_epochs=5,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # number of warmup steps for learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```

### Eval results

The model was evaluated using the binary test sets for MultiNLI, ANLI, LingNLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
dataset | mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c | lingnli-2c
--------|---------|----------|---------|----------|----------|------
accuracy | 0.925 | 0.922 | 0.892 | 0.676 | 0.665 | 0.888
speed (text/sec, CPU, 128 batch) | 6.0 | 6.3 | 3.0 | 5.8 | 5.0 | 7.6
speed (text/sec, GPU Tesla P100, 128 batch) | 473 | 487 | 230 | 390 | 340 | 586

## Limitations and bias

Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.

## Citation

If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.

### Ideas for cooperation or questions?

If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)

### Debugging and issues

Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.
MoritzLaurer/MiniLM-L6-mnli-binary
MoritzLaurer
bert
8
150
transformers
1
text-classification
true
false
false
null
['en']
null
null
0
0
0
0
0
0
0
['text-classification', 'zero-shot-classification']
false
true
true
2,346
# MiniLM-L6-mnli-binary

## Model description

This model was trained on the [MultiNLI](https://huggingface.co/datasets/multi_nli) dataset. The model was trained for binary NLI, which means that the "neutral" and "contradiction" classes were merged into one class. The model therefore predicts "entailment" or "not_entailment".
The base model is MiniLM-L6 from Microsoft, which is very fast, but a bit less accurate than other models.

## Intended uses & limitations

#### How to use the model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/MiniLM-L6-mnli-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs

premise = "I liked the movie"
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data

[MultiNLI](https://huggingface.co/datasets/multi_nli).

### Training procedure

MiniLM-L6-mnli-binary was trained using the Hugging Face trainer with the following hyperparameters.

```
training_args = TrainingArguments(
    num_train_epochs=5,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # number of warmup steps for learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```

### Eval results

The model was evaluated using the binary (matched) test set from MultiNLI. Accuracy: 0.886

## Limitations and bias

Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.

### BibTeX entry and citation info

If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
MoritzLaurer/MiniLM-L6-mnli-fever-docnli-ling-2c
MoritzLaurer
bert
8
12
transformers
1
text-classification
true
false
false
null
['en']
null
null
0
0
0
0
0
0
0
['text-classification', 'zero-shot-classification']
false
true
true
3,740
# MiniLM-L6-mnli-fever-docnli-ling-2c

## Model description

This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts to learn long range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment" to create more training data.
The base model is MiniLM-L6 from Microsoft, which is very fast, but a bit less accurate than other models.

## Intended uses & limitations

#### How to use the model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/MiniLM-L6-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data

This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).

### Training procedure

MiniLM-L6-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.

```
training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # number of warmup steps for learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```

### Eval results

The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.

mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c
---------|----------|---------|----------|----------
(to upload)

## Limitations and bias

Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.

### BibTeX entry and citation info

If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.

### Ideas for cooperation or questions?
### Training data
This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
### Training procedure
MiniLM-L6-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # ratio of warmup steps for learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```
### Eval results
The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.

mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c
---------|----------|---------|----------|----------
(to upload)

## Limitations and bias
Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m.laurer{at}vu.nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
MoritzLaurer/MiniLM-L6-mnli
MoritzLaurer
bert
8
3
transformers
0
text-classification
true
false
false
null
['en']
null
null
0
0
0
0
0
0
0
['text-classification', 'zero-shot-classification']
false
true
true
2,147
# MiniLM-L6-mnli
## Model description
This model was trained on the [MultiNLI](https://huggingface.co/datasets/multi_nli) dataset. The base model is MiniLM-L6 from Microsoft, which is very fast but a bit less accurate than larger models.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/MiniLM-L6-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I liked the movie"
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
[MultiNLI](https://huggingface.co/datasets/multi_nli).
### Training procedure
MiniLM-L6-mnli was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
    num_train_epochs=5,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # ratio of warmup steps for learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```
### Eval results
The model was evaluated using the (matched) test set from MultiNLI. Accuracy: 0.814
## Limitations and bias
Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
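For scoring many premise-hypothesis pairs at once, a batched variant of the snippet above can be used. This sketch is an illustrative addition, not part of the original card; the example pairs are made up.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/MiniLM-L6-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Tokenize several pairs in one call; padding aligns sequence lengths.
premises = ["I liked the movie", "The food was cold when it arrived"]
hypotheses = ["The movie was good.", "The meal was served hot."]
inputs = tokenizer(premises, hypotheses, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

label_names = ["entailment", "neutral", "contradiction"]
for probs in torch.softmax(logits, dim=-1).tolist():
    print({name: round(p * 100, 1) for name, p in zip(label_names, probs)})
```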
MoritzLaurer/covid-policy-roberta-21
MoritzLaurer
roberta
12
4
transformers
1
text-classification
true
false
true
null
['en']
null
null
0
0
0
0
0
0
0
['text-classification']
false
true
true
439
# Covid-Policy-RoBERTa-21
This model is currently in development at the Centre for European Policy Studies (CEPS). The model is not yet recommended for use. A more detailed description will follow.

If you are interested in using deep learning to identify 20 different types of policy measures against COVID-19 in text (NPIs, "non-pharmaceutical interventions"), don't hesitate to [contact me](https://www.ceps.eu/ceps-staff/moritz-laurer/).
MoritzLaurer/mDeBERTa-v3-base-mnli-xnli
MoritzLaurer
deberta-v2
8
432,624
transformers
84
zero-shot-classification
true
false
false
mit
['multilingual', 'en', 'ar', 'bg', 'de', 'el', 'es', 'fr', 'hi', 'ru', 'sw', 'th', 'tr', 'ur', 'vi', 'zh']
['multi_nli', 'xnli']
null
0
0
0
0
1
0
1
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
true
true
5,846
# Multilingual mDeBERTa-v3-base-mnli-xnli
## Model description
This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).

As of December 2021, mDeBERTa-base is the best performing multilingual base-sized transformer model, introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf). If you are looking for a smaller, faster (but less performant) model, you can try [multilingual-MiniLMv2-L6-mnli-xnli](https://huggingface.co/MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli).

### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli")

sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on the XNLI development dataset and the MNLI train dataset. The XNLI development set consists of 2490 professionally translated texts from English to 14 other languages (37,350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)). Note that XNLI also contains a training set of 15 machine-translated versions of the MNLI dataset for 15 languages, but due to quality issues with these machine translations, this model was only trained on the professional translations from the XNLI development set and the original English MNLI training set (392,702 texts). Not using machine-translated texts can avoid overfitting the model to the 15 languages, avoids catastrophic forgetting of the other 85 languages mDeBERTa was pre-trained on, and significantly reduces training costs.
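As an illustration of the data mix described above (not the original training script), the following sketch assembles the professionally translated XNLI development sets and the English MNLI training set with the `datasets` library; exact column handling may differ from the author's setup.
```python
from datasets import load_dataset, concatenate_datasets

xnli_langs = ["en", "ar", "bg", "de", "el", "es", "fr", "hi",
              "ru", "sw", "th", "tr", "ur", "vi", "zh"]

# Professionally translated XNLI dev sets: 2490 pairs per language.
xnli_dev = [load_dataset("xnli", lang, split="validation") for lang in xnli_langs]

# Original English MNLI training set (392,702 pairs); keep only the shared columns.
mnli_train = load_dataset("multi_nli", split="train")
mnli_train = mnli_train.remove_columns(
    [c for c in mnli_train.column_names if c not in ("premise", "hypothesis", "label")]
)

train_data = concatenate_datasets(xnli_dev + [mnli_train])
print(len(train_data))  # 15 * 2490 + 392,702 = 430,052 pairs
```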
### Training procedure
mDeBERTa-v3-base-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
    num_train_epochs=2,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=16,   # batch size for evaluation
    warmup_ratio=0.1,                # ratio of warmup steps for learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
)
```
### Eval results
The model was evaluated on the XNLI test set in 15 languages (5010 texts per language, 75,150 in total). Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training data in the specific language (cross-lingual transfer). This means that the model is also able to do NLI on the other 85 languages mDeBERTa was pre-trained on, but performance is most likely lower than for the languages available in XNLI. Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing, since none of the latest papers shows a multilingual average performance of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).

average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh
---------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------
0.808 | 0.802 | 0.829 | 0.825 | 0.826 | 0.883 | 0.845 | 0.834 | 0.771 | 0.813 | 0.748 | 0.793 | 0.807 | 0.740 | 0.795 | 0.8116

## Limitations and bias
Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases.
## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.
## Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
## Debugging and issues
Note that DeBERTa-v3 was released in late 2021 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 or higher might solve some issues. Note that mDeBERTa currently does not support FP16, see here: https://github.com/microsoft/DeBERTa/issues/77
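As a quick sanity check for the version issue mentioned above, one could verify the installed Transformers version before loading the model. This snippet is an illustrative addition, not part of the original card.
```python
# pip install "transformers>=4.13" sentencepiece  (sentencepiece is needed for the DeBERTa-v3 tokenizer)
from packaging import version
import transformers

assert version.parse(transformers.__version__) >= version.parse("4.13"), \
    "Older Transformers versions may fail to load the DeBERTa-v3 tokenizer"
# Also remember to keep fp16 disabled when fine-tuning mDeBERTa (see the linked issue above).
```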