| Column | Type | Range |
| ------------ | --------------------- | --------------- |
| modelId | string | length 4–81 |
| tags | sequence | |
| pipeline_tag | string | 17 classes |
| config | dict | |
| downloads | int64 | 0–59.7M |
| first_commit | timestamp[ns, tz=UTC] | |
| card | string | length 51–438k |
Declan/NPR_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: cc-by-4.0 --- # BART-base fine-tuned on NaturalQuestions for **Question Generation** [BART Model](https://arxiv.org/pdf/1910.13461.pdf) fine-tuned on [Google NaturalQuestions](https://ai.google.com/research/NaturalQuestions/) for **Question Generation** by treating the long answer as input and the question as output. ## Details of BART The **BART** model was presented in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by *Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer*. Here is the abstract: We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance. ## Details of the downstream task (QG) - Dataset 📚 🧐 Dataset: ```NaturalQuestions``` from Google (https://ai.google.com/research/NaturalQuestions/) | Dataset | Split | # samples | | -------- | ----- | --------- | | NaturalQuestions | train | 97650 | | NaturalQuestions | valid | 10850 | ## Model fine-tuning 🏋️‍ The training script can be found [here](https://github.com/McGill-NLP/MLQuestions/blob/main/QG/train.py) ## Model in Action 🚀 ```python from transformers import AutoModelForSeq2SeqLM, BartTokenizer # Load the tokenizer tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') # Load the model model = AutoModelForSeq2SeqLM.from_pretrained("McGill-NLP/bart-qg-nq-checkpoint") ``` ## Citation If you want to cite this model, you can use this: ```bibtex @inproceedings{kulshreshtha-etal-2021-back, title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval", author = "Kulshreshtha, Devang and Belfer, Robert and Serban, Iulian Vlad and Reddy, Siva", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.566", pages = "7064--7078", abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA).
While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.", } ``` > Created by [Devang Kulshreshtha](https://geekydevu.netlify.app/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
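The card above loads the tokenizer and checkpoint but stops short of generating a question. As a minimal, hedged sketch of end-to-end usage: the model and tokenizer names come from the card itself, while the example passage and the beam-search settings are illustrative choices, not values documented there.

```python
from transformers import AutoModelForSeq2SeqLM, BartTokenizer

# Load the tokenizer and the fine-tuned question-generation checkpoint named in the card
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("McGill-NLP/bart-qg-nq-checkpoint")

# Any passage (the "long answer") can serve as input; this one is only an illustration
passage = (
    "BART is a denoising autoencoder for pretraining sequence-to-sequence models, "
    "trained by corrupting text and learning to reconstruct the original."
)
inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)

# Beam-search settings are illustrative defaults, not tuned values from the card
output_ids = model.generate(**inputs, num_beams=4, max_length=64, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```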
Declan/NewYorkPost_model_v1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Fatima Fellowship Quick Coding Challenge (Pick 1): - Deep Learning for Vision
Declan/NewYorkTimes_model_v1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer model-index: - name: poem-gen-spanish-t5-small-d2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # poem-gen-spanish-t5-small-d2 This model is a fine-tuned version of [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9027 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 3.223 | 0.73 | 30000 | 3.1479 | | 3.0109 | 1.46 | 60000 | 3.0544 | | 2.8649 | 2.19 | 90000 | 2.9730 | | 2.7603 | 2.93 | 120000 | 2.9301 | | 2.6343 | 3.66 | 150000 | 2.9188 | | 2.5094 | 4.39 | 180000 | 2.9064 | | 2.391 | 5.12 | 210000 | 2.9073 | | 2.3592 | 5.85 | 240000 | 2.9022 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
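The poem-gen-spanish-t5-small-d2 card above reports hyperparameters and losses but leaves its usage sections empty. Below is a minimal, hedged inference sketch for a spanish-t5-small seq2seq fine-tune; the repo namespace and the prompt format are assumptions, since the card documents neither.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical repo id: substitute the real namespace of poem-gen-spanish-t5-small-d2
model_id = "your-username/poem-gen-spanish-t5-small-d2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The prompt format is a guess; the card does not document how training inputs were built
prompt = "poema: la luna sobre el mar"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, do_sample=True, top_p=0.95, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```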
Declan/NewYorkTimes_model_v4
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: canine-c-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0990441507705203 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # canine-c-finetuned-cola This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6246 - Matthews Correlation: 0.0990 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6142 | 1.0 | 535 | 0.6268 | 0.0 | | 0.607 | 2.0 | 1070 | 0.6234 | 0.0 | | 0.6104 | 3.0 | 1605 | 0.6226 | 0.0 | | 0.5725 | 4.0 | 2140 | 0.6246 | 0.0990 | | 0.5426 | 5.0 | 2675 | 0.6866 | 0.0495 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
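Since the canine-c-finetuned-cola card above leaves "Intended uses & limitations" empty, here is a hedged sketch of scoring sentences for acceptability with the text-classification pipeline. The repo id is a placeholder (the card gives no namespace) and the label names depend on how the checkpoint's config was exported.

```python
from transformers import pipeline

# Hypothetical repo id: substitute the actual namespace of canine-c-finetuned-cola
clf = pipeline("text-classification", model="your-username/canine-c-finetuned-cola")

# CoLA is a binary acceptability task; label names depend on the exported config
print(clf("The book was written by the author."))
print(clf("The book was wrote by the author."))
```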
Declan/Politico_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2022-04-01T17:59:52Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: juaner/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # juaner/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1909 - Validation Loss: 0.5553 - Train Matthews Correlation: 0.5279 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5191 | 0.4491 | 0.4718 | 0 | | 0.3270 | 0.4571 | 0.5196 | 1 | | 0.1909 | 0.5553 | 0.5279 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.0
Declan/Politico_model_v4
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.936 - name: F1 type: f1 value: 0.9361334972007946 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2205 - Accuracy: 0.936 - F1: 0.9361 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.0442 | 1.0 | 250 | 0.2392 | 0.926 | 0.9265 | | 0.0463 | 2.0 | 500 | 0.2205 | 0.936 | 0.9361 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
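The emotion-classification card above reports accuracy and F1 but no usage. A minimal, hedged sketch with the text-classification pipeline follows; the repo namespace and the example sentence are assumptions.

```python
from transformers import pipeline

# Hypothetical repo id: substitute the actual namespace of the emotion fine-tune
classifier = pipeline(
    "text-classification",
    model="your-username/distilbert-base-uncased-finetuned-emotion",
)
# Returns the predicted emotion label and its score for the input text
print(classifier("I can't believe how well this turned out!"))
```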
Declan/Politico_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
pssteval INFO: ASR metrics for split `valid`: FER 9.8%, PER 20.9%
Declan/Reuters_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: en datasets: - Crunchbase --- # Company Classifier This fine-tuned Distilbert model is using company descriptions for classification. The model is tasked to classify the company as either finance or biotech. The demo can be found on my profile under Spaces (https://huggingface.co/erikacardenas300). I hope you enjoy it!
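The company-classifier card above describes labeling company descriptions as finance or biotech but gives no code. A hedged sketch using the text-classification pipeline follows; the repo id is hypothetical (the card only points to the author's profile), and the example description is invented for illustration.

```python
from transformers import pipeline

# Hypothetical repo id; check the author's profile for the real checkpoint name
classifier = pipeline("text-classification", model="erikacardenas300/company-classifier")

description = (
    "We develop monoclonal antibody therapies targeting rare autoimmune disorders."
)
print(classifier(description))  # per the card, the label should be biotech vs. finance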
Declan/Reuters_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - abd-1999/autotrain-data-bbc-news-summarization co2_eq_emissions: 2313.4037079026934 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 694821095 - CO2 Emissions (in grams): 2313.4037079026934 ## Validation Metrics - Loss: 3.0294156074523926 - Rouge1: 2.1467 - Rouge2: 0.0853 - RougeL: 2.1524 - RougeLsum: 2.1534 - Gen Len: 18.5603 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/abd-1999/autotrain-bbc-news-summarization-694821095 ```
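The AutoTrain card above shows only a cURL call against the Inference API. As a hedged local alternative, the sketch below uses the summarization pipeline; the model id is inferred from the API URL in the card and the length limits are illustrative.

```python
from transformers import pipeline

# Model id inferred from the Inference API URL in the card; adjust if it differs
summarizer = pipeline(
    "summarization",
    model="abd-1999/autotrain-bbc-news-summarization-694821095",
)

article = "Replace this string with a BBC news article to summarize."
# Length limits are illustrative; the card reports an average generation length of ~18 tokens
print(summarizer(article, max_length=48, min_length=8))
```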
Declan/Reuters_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikihow metrics: - rouge model-index: - name: t5-small-finetuned-wikihow_3epoch results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wikihow type: wikihow args: all metrics: - name: Rouge1 type: rouge value: 25.5784 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-wikihow_3epoch This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset. It achieves the following results on the evaluation set: - Loss: 2.5163 - Rouge1: 25.5784 - Rouge2: 8.9929 - Rougel: 21.5345 - Rougelsum: 24.9382 - Gen Len: 18.384 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.9421 | 0.25 | 5000 | 2.6545 | 23.2336 | 7.5502 | 19.5899 | 22.5521 | 18.4076 | | 2.8411 | 0.51 | 10000 | 2.6103 | 24.3524 | 8.2068 | 20.5238 | 23.6679 | 18.2606 | | 2.7983 | 0.76 | 15000 | 2.5836 | 24.8169 | 8.4826 | 20.8765 | 24.1686 | 18.3211 | | 2.7743 | 1.02 | 20000 | 2.5627 | 24.9904 | 8.5625 | 21.0344 | 24.3416 | 18.3786 | | 2.7452 | 1.27 | 25000 | 2.5508 | 25.1497 | 8.6872 | 21.152 | 24.4751 | 18.3524 | | 2.7353 | 1.53 | 30000 | 2.5384 | 25.2909 | 8.7408 | 21.2344 | 24.629 | 18.4453 | | 2.7261 | 1.78 | 35000 | 2.5322 | 25.3748 | 8.7802 | 21.312 | 24.7191 | 18.3754 | | 2.7266 | 2.03 | 40000 | 2.5265 | 25.4095 | 8.8915 | 21.3871 | 24.7685 | 18.4013 | | 2.706 | 2.29 | 45000 | 2.5211 | 25.4372 | 8.8926 | 21.4124 | 24.7902 | 18.3776 | | 2.7073 | 2.54 | 50000 | 2.5176 | 25.4925 | 8.9668 | 21.5103 | 24.8608 | 18.4303 | | 2.703 | 2.8 | 55000 | 2.5163 | 25.5784 | 8.9929 | 21.5345 | 24.9382 | 18.384 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
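The t5-small-finetuned-wikihow_3epoch card above documents training and ROUGE scores but no inference. A hedged sketch follows; the repo namespace is a placeholder, and the "summarize:" prefix is the standard t5-small convention rather than something the card confirms.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical repo id: substitute the actual namespace of t5-small-finetuned-wikihow_3epoch
model_id = "your-username/t5-small-finetuned-wikihow_3epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# "summarize:" is the usual t5-small task prefix; the card does not confirm it was kept
text = "summarize: " + "Long how-to article text goes here."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```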
DeepChem/ChemBERTa-77M-MLM
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,416
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9260113300845928 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2280 - Accuracy: 0.926 - F1: 0.9260 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8646 | 1.0 | 250 | 0.3326 | 0.9045 | 0.9009 | | 0.2663 | 2.0 | 500 | 0.2280 | 0.926 | 0.9260 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
DeepPavlov/bert-base-cased-conversational
[ "pytorch", "jax", "bert", "feature-extraction", "en", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3,009
null
--- title: DualStyleGAN emoji: 👀 colorFrom: green colorTo: gray sdk: gradio sdk_version: 2.8.13 app_file: app.py pinned: false --- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
DeltaHub/adapter_t5-3b_mrpc
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: en thumbnail: http://www.huggingtweets.com/clortown/1648875085007/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1488574779351187458/RlIQNUFG_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">yeosang elf agenda</div> <div style="text-align: center; font-size: 14px;">@clortown</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from yeosang elf agenda. | Data | yeosang elf agenda | | --- | --- | | Tweets downloaded | 3140 | | Retweets | 538 | | Short tweets | 463 | | Tweets kept | 2139 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cupnlna/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @clortown's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/uii743r9) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/uii743r9/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/clortown') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
DeltaHub/adapter_t5-3b_qnli
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: vliegmachine results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.5970149040222168 --- # vliegmachine Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### f117 ![f117](images/f117.jpg) #### f16 ![f16](images/f16.jpg) #### f18 ![f18](images/f18.jpg)
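The HuggingPics card above lists the aircraft classes but no inference code. A hedged image-classification sketch follows; the repo namespace and the image path are placeholders.

```python
from transformers import pipeline

# Hypothetical repo id: substitute the actual namespace of the vliegmachine classifier
classifier = pipeline("image-classification", model="your-username/vliegmachine")

# Accepts a local path or URL; the card's classes are f117, f16, and f18
print(classifier("images/f16.jpg"))
```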
Denilson/gbert-base-germaner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit inference: False --- # training logs - https://wandb.ai/junyu/huggingface/runs/1jg2jlgt # install - https://github.com/JunnYu/FLASHQuad_pytorch # usage ```python import torch from flash import FLASHForMaskedLM from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("junnyu/flash_small_wwm_cluecorpussmall") model = FLASHForMaskedLM.from_pretrained("junnyu/flash_small_wwm_cluecorpussmall") model.eval() text = "天气预报说今天的天[MASK]很好,那么我[MASK]一起去公园玩吧!" inputs = tokenizer(text, return_tensors="pt", padding="max_length", max_length=512, return_token_type_ids=False) #这里必须是512,不然结果可能不对。 with torch.no_grad(): pt_outputs = model(**inputs).logits[0] pt_outputs_sentence = "pytorch: " for i, id in enumerate(tokenizer.encode(text)): if id == tokenizer.mask_token_id: val,idx = pt_outputs[i].softmax(-1).topk(k=5) tokens = tokenizer.convert_ids_to_tokens(idx) new_tokens = [] for v,t in zip(val.cpu(),tokens): new_tokens.append(f"{t}+{round(v.item(),4)}") pt_outputs_sentence += "[" + "||".join(new_tokens) + "]" else: pt_outputs_sentence += "".join( tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True)) print(pt_outputs_sentence) # pytorch: 天气预报说今天的天[气+0.994||天+0.0015||空+0.0014||晴+0.0005||阳+0.0003]很好,那么我[们+0.9563||就+0.0381||也+0.0032||俩+0.0004||来+0.0002]一起去公园玩吧! ```
Deniskin/essays_small_2000i
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: es datasets: - squad_es - hackathon-pln-es/biomed_squad_es_v2 metrics: - "f1" --- # biomedtra-small for QA This model was trained as part of the "Extractive QA Biomedicine" project developed during the 2022 [Hackathon](https://somosnlp.org/hackathon) organized by SOMOS NLP. ## Motivation Recent research has made available Spanish Language Models trained on Biomedical corpus. This project explores the use of these new models to generate extractive Question Answering models for Biomedicine, and compares their effectiveness with general masked language models. The models trained during the [Hackathon](https://somosnlp.org/hackathon) were: [hackathon-pln-es/roberta-base-bne-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-bne-squad2-es) [hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es) [hackathon-pln-es/roberta-base-biomedical-es-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-biomedical-es-squad2-es) [hackathon-pln-es/biomedtra-small-es-squad2-es](https://huggingface.co/hackathon-pln-es/biomedtra-small-es-squad2-es) ## Description This model is a fine-tuned version of [mrm8488/biomedtra-small-es](https://huggingface.co/mrm8488/biomedtra-small-es) on the [squad_es (v2)](https://huggingface.co/datasets/squad_es) training dataset. ## Hyperparameters The hyperparameters were chosen based on those used in [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2), an english-based model trained for similar purposes ``` --num_train_epochs 10 \ --learning_rate 1e-4 \ --max_seq_length 384 \ --doc_stride 128 \ ``` ## Performance Evaluated on the [hackathon-pln-es/biomed_squad_es_v2](https://huggingface.co/datasets/hackathon-pln-es/biomed_squad_es_v2) dev set. |Model |Base Model Domain|exact |f1 |HasAns_exact|HasAns_f1|NoAns_exact|NoAns_f1| |--------------------------------------------------------------|-----------------|-------|-------|------------|---------|-----------|--------| |hackathon-pln-es/roberta-base-bne-squad2-es |General |67.6341|75.6988|53.7367 |70.0526 |81.2174 |81.2174 | |hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es|Biomedical |66.8426|75.2346|53.0249 |70.0031 |80.3478 |80.3478 | |hackathon-pln-es/roberta-base-biomedical-es-squad2-es |Biomedical |67.6341|74.5612|47.6868 |61.7012 |87.1304 | 87.1304| |hackathon-pln-es/biomedtra-small-es-squad2-es |Biomedical |34.4767|44.3294|45.3737 |65.307 |23.8261 |23.8261 | ## Team Santiago Maximo: [smaximo](https://huggingface.co/smaximo)
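The card above explains extractive QA over Spanish biomedical text but does not show inference. A minimal, hedged sketch with the question-answering pipeline follows; the checkpoint name is taken from the card, while the question and context are invented for illustration.

```python
from transformers import pipeline

# Checkpoint name taken from the card; question and context are illustrative only
qa = pipeline("question-answering", model="hackathon-pln-es/biomedtra-small-es-squad2-es")

result = qa(
    question="¿Qué enzima inhibe el fármaco?",
    context=(
        "El fármaco inhibe la enzima convertidora de angiotensina "
        "en pacientes con hipertensión arterial."
    ),
)
print(result)  # dict with 'answer', 'score', 'start' and 'end'
```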
Deniskin/gpt3_medium
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
52
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-hindi-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hindi-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
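The wav2vec2-large-xls-r-300m-hindi-colab card above gives training settings but no usage. A hedged transcription sketch with the ASR pipeline follows; the repo namespace and the audio file are placeholders, and 16 kHz mono input is assumed, as is typical for wav2vec2 checkpoints.

```python
from transformers import pipeline

# Hypothetical repo id: substitute the actual namespace of the Hindi XLS-R fine-tune
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/wav2vec2-large-xls-r-300m-hindi-colab",
)

# wav2vec2 checkpoints typically expect 16 kHz mono audio
print(asr("sample_hindi_clip.wav"))
```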
Denver/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-04-02T03:47:54Z
--- language: es datasets: - squad_es - hackathon-pln-es/biomed_squad_es_v2 metrics: - "f1" --- # roberta-base-biomedical-clinical-es for QA This model was trained as part of the "Extractive QA Biomedicine" project developed during the 2022 [Hackathon](https://somosnlp.org/hackathon) organized by SOMOS NLP. ## Motivation Recent research has made available Spanish Language Models trained on Biomedical corpus. This project explores the use of these new models to generate extractive Question Answering models for Biomedicine, and compares their effectiveness with general masked language models. The models trained during the [Hackathon](https://somosnlp.org/hackathon) were: [hackathon-pln-es/roberta-base-bne-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-bne-squad2-es) [hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es) [hackathon-pln-es/roberta-base-biomedical-es-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-biomedical-es-squad2-es) [hackathon-pln-es/biomedtra-small-es-squad2-es](https://huggingface.co/hackathon-pln-es/biomedtra-small-es-squad2-es) ## Description This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the [squad_es (v2)](https://huggingface.co/datasets/squad_es) training dataset. ## Hyperparameters The hyperparameters were chosen based on those used in [PlanTL-GOB-ES/roberta-base-bne-sqac](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-sqac), a spanish-based QA model trained on a dataset with SQUAD v1 fromat. ``` --num_train_epochs 2 --learning_rate 3e-5 --weight_decay 0.01 --max_seq_length 386 --doc_stride 128 ``` ## Performance Evaluated on the [hackathon-pln-es/biomed_squad_es_v2](https://huggingface.co/datasets/hackathon-pln-es/biomed_squad_es_v2) dev set. |Model |Base Model Domain|exact |f1 |HasAns_exact|HasAns_f1|NoAns_exact|NoAns_f1| |--------------------------------------------------------------|-----------------|-------|-------|------------|---------|-----------|--------| |hackathon-pln-es/roberta-base-bne-squad2-es |General |67.6341|75.6988|53.7367 |70.0526 |81.2174 |81.2174 | |hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es|Biomedical |66.8426|75.2346|53.0249 |70.0031 |80.3478 |80.3478 | |hackathon-pln-es/roberta-base-biomedical-es-squad2-es |Biomedical |67.6341|74.5612|47.6868 |61.7012 |87.1304 | 87.1304| |hackathon-pln-es/biomedtra-small-es-squad2-es |Biomedical |34.4767|44.3294|45.3737 |65.307 |23.8261 |23.8261 | ## Team Santiago Maximo: [smaximo](https://huggingface.co/smaximo)
DeskDown/MarianMixFT_en-hi
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6432 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7607 | 1.0 | 2334 | 3.6664 | | 3.6323 | 2.0 | 4668 | 3.6461 | | 3.6075 | 3.0 | 7002 | 3.6432 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
DeskDown/MarianMixFT_en-id
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: mit tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: paper_feedback_intent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paper_feedback_intent This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3621 - Accuracy: 0.9302 - Precision: 0.9307 - Recall: 0.9302 - F1: 0.9297 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.9174 | 1.0 | 11 | 0.7054 | 0.7907 | 0.7903 | 0.7907 | 0.7861 | | 0.6917 | 2.0 | 22 | 0.4665 | 0.8140 | 0.8134 | 0.8140 | 0.8118 | | 0.4276 | 3.0 | 33 | 0.3326 | 0.9070 | 0.9065 | 0.9070 | 0.9041 | | 0.2656 | 4.0 | 44 | 0.3286 | 0.9070 | 0.9065 | 0.9070 | 0.9041 | | 0.1611 | 5.0 | 55 | 0.3044 | 0.9302 | 0.9307 | 0.9302 | 0.9297 | | 0.1025 | 6.0 | 66 | 0.3227 | 0.9302 | 0.9307 | 0.9302 | 0.9297 | | 0.0799 | 7.0 | 77 | 0.3216 | 0.9302 | 0.9307 | 0.9302 | 0.9297 | | 0.0761 | 8.0 | 88 | 0.3529 | 0.9302 | 0.9307 | 0.9302 | 0.9297 | | 0.0479 | 9.0 | 99 | 0.3605 | 0.9302 | 0.9307 | 0.9302 | 0.9297 | | 0.0358 | 10.0 | 110 | 0.3621 | 0.9302 | 0.9307 | 0.9302 | 0.9297 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
DeskDown/MarianMixFT_en-ja
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/780200431859269633/kXZwDd_Y_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Romantic Poetry Bot</div> <div style="text-align: center; font-size: 14px;">@percybotshelley</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Romantic Poetry Bot. | Data | Romantic Poetry Bot | | --- | --- | | Tweets downloaded | 3205 | | Retweets | 0 | | Short tweets | 20 | | Tweets kept | 3185 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bj4pakr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @percybotshelley's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2yfs8v92) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2yfs8v92/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/percybotshelley') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Devid/DialoGPT-small-Miku
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: apache-2.0 tags: - accelerator metrics: - accuracy model-index: - name: finetuned-vit-base-patch16-224-upside-down-detector results: [] widget: - src: https://huggingface.co/jaygala24/finetuned-vit-base-patch16-224-upside-down-detector/resolve/main/original.jpg example_title: original - src: https://huggingface.co/jaygala24/finetuned-vit-base-patch16-224-upside-down-detector/resolve/main/upside_down.jpg example_title: upside_down --- # finetuned-vit-base-patch16-224-upside-down-detector This model is a fine-tuned version of [vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the custom image orientation dataset adapted from the [beans](https://huggingface.co/datasets/beans) dataset. It achieves the following results on the evaluation set: - Accuracy: 0.8947 ## Training and evaluation data The custom dataset for image orientation adapted from [beans](https://huggingface.co/datasets/beans) dataset contains a total of 2,590 image samples with 1,295 original and 1,295 upside down. The model was fine-tuned on the train subset and evaluated on validation and test subsets. The dataset splits are listed below: | Split | # examples | |:----------:|:----------:| | train | 2068 | | validation | 133 | | test | 128 | ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-04 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 32 - num_epochs: 5 ### Training results | Epoch | Accuracy | |:----------:|:----------:| | 0 | 0.8609 | | 1 | 0.8835 | | 2 | 0.8571 | | 3 | 0.8941 | | 4 | 0.8941 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.9.0+cu111 - Pytorch/XLA 1.9 - Datasets 2.0.0 - Tokenizers 0.12.0
DewiBrynJones/wav2vec2-large-xlsr-welsh
[ "cy", "dataset:common_voice", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - semantic-search - chinese --- # DMetaSoul/sbert-chinese-general-v1-distill 此模型是之前[开源通用语义匹配模型](https://huggingface.co/DMetaSoul/sbert-chinese-general-v1)的蒸馏版本(仅4层 BERT),适用于**通用语义匹配**场景(此模型在 Chinese-STS 任务上效果较好,但在其它任务上效果并非最优,存在一定过拟合风险),比如文本特征抽取、文本向量聚类、文本语义搜索等业务场景。 离线训练好的大模型如果直接用于线上推理,对计算资源有苛刻的需求,而且难以满足业务环境对延迟、吞吐量等性能指标的要求,这里我们使用蒸馏手段来把大模型轻量化。从 12 层 BERT 蒸馏为 4 层后,模型参数量缩小到 44%,大概 latency 减半、throughput 翻倍、精度下降 3% 左右(具体结果详见下文评估小节)。 # Usage ## 1. Sentence-Transformers 通过 [sentence-transformers](https://www.SBERT.net) 框架来使用该模型,首先进行安装: ``` pip install -U sentence-transformers ``` 然后使用下面的代码来载入该模型并进行文本表征向量的提取: ```python from sentence_transformers import SentenceTransformer sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"] model = SentenceTransformer('DMetaSoul/sbert-chinese-general-v1-distill') embeddings = model.encode(sentences) print(embeddings) ``` ## 2. HuggingFace Transformers 如果不想使用 [sentence-transformers](https://www.SBERT.net) 的话,也可以通过 HuggingFace Transformers 来载入该模型并进行文本向量抽取: ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-general-v1-distill') model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-general-v1-distill') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation 这里主要跟蒸馏前对应的 teacher 模型作了对比 *性能:* | | Teacher | Student | Gap | | ---------- | --------------------- | ------------------- | ----- | | Model | BERT-12-layers (102M) | BERT-4-layers (45M) | 0.44x | | Cost | 23s | 12s | -47% | | Latency | 37ms | 20ms | -46% | | Throughput | 422 sentence/s | 788 sentence/s | 1.8x | *精度:* | | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** | **Avg** | | -------------- | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- | ------- | | **Teacher** | 84.54% | 82.17% | 23.80% | 65.94% | 45.52% | 11.52% | 48.51% | 51.71% | | **Student** | 83.39% | 79.96% | 20.25% | 63.39% | 43.70% | 7.54% | 46.91% | 49.28% | | **Gap** (abs.) | - | - | - | - | - | - | - | -2.43% | *基于1万条数据测试,GPU设备是V100,batch_size=16,max_seq_len=256* ## Citing & Authors E-mail: [email protected]
Dhito/am
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
language: en
widget:
- text: 'define "toecoin": toecoin rose by 200% after Elon Musk mentioned it in his tweet'
datasets:
- 'marksverdhei/wordnet-definitions-en-2021'
---

# T5-define

(This model is still a work in progress. If you use it for fine-tuning, make sure to save a local copy.)

This model is trained to generate word definitions based on a word and a context, using the subset of WordNet containing all words that have both an example and a definition. The model uses task prompts of the format 'define "[word]": [example sentence]'.

In particular, the model acts as a one-shot learner for unseen words, since it has to infer the definition from only a single example.

How to run:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("marksverdhei/t5-base-define")
model = T5ForConditionalGeneration.from_pretrained("marksverdhei/t5-base-define")

prompt = "define \"noseplow\": The children hid as the noseplow drove across the street"
ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_tokens = model.generate(ids)[0][1:-1]
print(tokenizer.decode(generated_tokens))
```

See the gist for the source code used to train the model:

https://gist.github.com/marksverdhei/0a13f67e65460b71c05fcf558a6a91ae
Dhruva/Interstellar
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- semantic-search
- chinese
---

# DMetaSoul/sbert-chinese-general-v2-distill

This model is a distilled version (only 4 BERT layers) of our previously released [general-purpose semantic matching model](https://huggingface.co/DMetaSoul/sbert-chinese-general-v2). It is intended for **general-purpose semantic matching** scenarios; in terms of results, it **generalizes better across a variety of tasks and encodes text faster**.

Serving a large offline-trained model directly for online inference places heavy demands on compute resources and makes it hard to meet production requirements for latency, throughput, and other performance metrics, so here we use distillation to make the model lightweight. After distilling the 12-layer BERT down to 4 layers, the number of parameters shrinks to 44% of the original, latency is roughly halved, throughput roughly doubles, and accuracy drops by about 6% (see the evaluation section below for details).

# Usage

## 1. Sentence-Transformers

To use this model with the [sentence-transformers](https://www.SBERT.net) framework, first install it:

```
pip install -U sentence-transformers
```

Then use the following code to load the model and extract text embedding vectors:

```python
from sentence_transformers import SentenceTransformer
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]

model = SentenceTransformer('DMetaSoul/sbert-chinese-general-v2-distill')
embeddings = model.encode(sentences)
print(embeddings)
```

## 2. HuggingFace Transformers

If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model and extract text vectors with HuggingFace Transformers:

```python
from transformers import AutoTokenizer, AutoModel
import torch

#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-general-v2-distill')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-general-v2-distill')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation

Here we mainly compare against the corresponding teacher model before distillation:

*Performance:*

|            | Teacher               | Student             | Gap   |
| ---------- | --------------------- | ------------------- | ----- |
| Model      | BERT-12-layers (102M) | BERT-4-layers (45M) | 0.44x |
| Cost       | 23s                   | 12s                 | -47%  |
| Latency    | 38ms                  | 20ms                | -47%  |
| Throughput | 418 sentence/s        | 791 sentence/s      | 1.9x  |

*Accuracy:*

|                | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** | **Avg** |
| -------------- | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- | ------- |
| **Teacher**    | 77.19%       | 72.59%        | 36.79%    | 76.91%    | 49.62%       | 16.24%    | 63.15%     | 56.07%  |
| **Student**    | 76.49%       | 73.33%        | 26.46%    | 64.26%    | 46.02%       | 11.83%    | 52.45%     | 50.12%  |
| **Gap** (abs.) | -            | -             | -         | -         | -            | -         | -          | -5.95%  |

*Tested on 10,000 samples; GPU: V100, batch_size=16, max_seq_len=256*

## Citing & Authors

E-mail: [email protected]
Dibyaranjan/nl_image_search
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- semantic-search
- chinese
---

# DMetaSoul/sbert-chinese-qmc-domain-v1

This model is a distilled, lightweight version (only 4 BERT layers) of our previously open-sourced [question matching model](https://huggingface.co/DMetaSoul/sbert-chinese-qmc-domain-v1). It is intended for **open-domain question matching** scenarios, for example:

- 洗澡用什么香皂好?vs. 洗澡用什么香皂好 (What soap should I use in the shower? vs. What soap is good for the shower)
- 大连哪里拍婚纱照好点? vs. 大连哪里拍婚纱照比较好 (Where in Dalian is a good place to take wedding photos? vs. Where in Dalian is it better to take wedding photos)
- 银行卡怎样挂失?vs. 银行卡丢了怎么挂失啊? (How do I report a lost bank card? vs. My bank card is lost, how do I report it?)

Serving a large offline-trained model directly for online inference places heavy demands on compute resources and makes it hard to meet production requirements for latency, throughput, and other performance metrics, so here we use distillation to make the model lightweight. After distilling the 12-layer BERT down to 4 layers, the number of parameters shrinks to 44% of the original, latency is roughly halved, throughput roughly doubles, and accuracy drops by about 4% (see the evaluation section below for details).

# Usage

## 1. Sentence-Transformers

To use this model with the [sentence-transformers](https://www.SBERT.net) framework, first install it:

```
pip install -U sentence-transformers
```

Then use the following code to load the model and extract text embedding vectors:

```python
from sentence_transformers import SentenceTransformer
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]

model = SentenceTransformer('DMetaSoul/sbert-chinese-qmc-domain-v1')
embeddings = model.encode(sentences)
print(embeddings)
```

## 2. HuggingFace Transformers

If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model and extract text vectors with HuggingFace Transformers:

```python
from transformers import AutoTokenizer, AutoModel
import torch

#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-qmc-domain-v1')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-qmc-domain-v1')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation

Here we mainly compare against the corresponding teacher model before distillation.

*Performance:*

|            | Teacher               | Student             | Gap   |
| ---------- | --------------------- | ------------------- | ----- |
| Model      | BERT-12-layers (102M) | BERT-4-layers (45M) | 0.44x |
| Cost       | 23s                   | 12s                 | -47%  |
| Latency    | 38ms                  | 20ms                | -47%  |
| Throughput | 421 sentence/s        | 791 sentence/s      | 1.9x  |

*Accuracy:*

|                | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** | **Avg** |
| -------------- | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- | ------- |
| **Teacher**    | 80.90%       | 76.62%        | 34.51%    | 77.05%    | 52.95%       | 12.97%    | 59.47%     | 56.35%  |
| **Student**    | 79.89%       | 76.34%        | 27.59%    | 69.26%    | 49.40%       | 9.06%     | 53.52%     | 52.15%  |
| **Gap** (abs.) | -            | -             | -         | -         | -            | -         | -          | -4.2%   |

*Tested on 10,000 samples; GPU: V100, batch_size=16, max_seq_len=256*

## Citing & Authors

E-mail: [email protected]
DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro
[ "pytorch", "tensorboard", "marian", "text2text-generation", "dataset:wmt16", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
**Upside down detector**: Train a model to detect whether images are upside down

* Trained on Google Street View.
* Synthetically turn some of the images upside down, and create a training and test set (a minimal sketch of this setup is shown below).
* Build a neural network using TensorFlow.
* Train it to classify image orientation until a reasonable accuracy is reached.
* Look at some of the images that were classified incorrectly, and explain what might be done to improve the model's performance on these images in the future.

*The code is taken from [RotNet](https://github.com/d4nst/RotNet), with minor changes.*
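The card above only describes the task, so the following is a minimal, hypothetical sketch of the synthetic-flipping setup it outlines (not the RotNet-derived code that was actually used): half of a batch of images is flipped upside down with NumPy and a small TensorFlow classifier is trained on the resulting labels. The input size and the `street_view_images.npy` file are assumptions made purely for illustration.

```python
import numpy as np
import tensorflow as tf

def make_orientation_dataset(images):
    """Flip half of the images upside down (label 1); leave the rest as-is (label 0)."""
    images = images.astype("float32") / 255.0          # copy + normalize
    labels = np.zeros(len(images), dtype="int32")
    flip_idx = np.random.choice(len(images), size=len(images) // 2, replace=False)
    images[flip_idx] = images[flip_idx, ::-1, :, :]    # reverse the height axis = vertical flip
    labels[flip_idx] = 1
    return images, labels

# Tiny CNN; the 128x128 input size is an assumption, not taken from the card.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x_train would be an array of street-view crops loaded elsewhere (placeholder):
# x_train = np.load("street_view_images.npy")
# x, y = make_orientation_dataset(x_train)
# model.fit(x, y, validation_split=0.2, epochs=5)
```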
DiegoBalam12/institute_classification
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec_asr_swbd_10_epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec_asr_swbd_10_epochs This model is a fine-tuned version of [facebook/wav2vec2-large-robust-ft-swbd-300h](https://huggingface.co/facebook/wav2vec2-large-robust-ft-swbd-300h) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Wer: 0.9627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:------:|:---------------:|:------:| | 1.0682 | 0.22 | 5000 | 0.7383 | 0.4431 | | 0.9143 | 0.44 | 10000 | 0.7182 | 0.4058 | | 0.8905 | 0.66 | 15000 | 0.6291 | 0.3987 | | 0.8354 | 0.87 | 20000 | 0.5976 | 0.3954 | | 0.7749 | 1.09 | 25000 | 0.5773 | 0.3901 | | 0.7336 | 1.31 | 30000 | 0.5812 | 0.3871 | | 0.7314 | 1.53 | 35000 | 0.5802 | 0.3895 | | 0.0 | 1.75 | 40000 | nan | 0.9627 | | 0.0 | 1.97 | 45000 | nan | 0.9627 | | 0.0 | 2.19 | 50000 | nan | 0.9627 | | 0.0 | 2.4 | 55000 | nan | 0.9627 | | 0.0 | 2.62 | 60000 | nan | 0.9627 | | 0.0 | 2.84 | 65000 | nan | 0.9627 | | 0.0 | 3.06 | 70000 | nan | 0.9627 | | 0.0 | 3.28 | 75000 | nan | 0.9627 | | 0.0 | 3.5 | 80000 | nan | 0.9627 | | 0.0 | 3.72 | 85000 | nan | 0.9627 | | 0.0 | 3.93 | 90000 | nan | 0.9627 | | 0.0 | 4.15 | 95000 | nan | 0.9627 | | 0.0 | 4.37 | 100000 | nan | 0.9627 | | 0.0 | 4.59 | 105000 | nan | 0.9627 | | 0.0 | 4.81 | 110000 | nan | 0.9627 | | 0.0 | 5.03 | 115000 | nan | 0.9627 | | 0.0 | 5.25 | 120000 | nan | 0.9627 | | 0.0 | 5.46 | 125000 | nan | 0.9627 | | 0.0 | 5.68 | 130000 | nan | 0.9627 | | 0.0 | 5.9 | 135000 | nan | 0.9627 | | 0.0 | 6.12 | 140000 | nan | 0.9627 | | 0.0 | 6.34 | 145000 | nan | 0.9627 | | 0.0 | 6.56 | 150000 | nan | 0.9627 | | 0.0 | 6.78 | 155000 | nan | 0.9627 | | 0.0 | 7.0 | 160000 | nan | 0.9627 | | 0.0 | 7.21 | 165000 | nan | 0.9627 | | 0.0 | 7.43 | 170000 | nan | 0.9627 | | 0.0 | 7.65 | 175000 | nan | 0.9627 | | 0.0 | 7.87 | 180000 | nan | 0.9627 | | 0.0 | 8.09 | 185000 | nan | 0.9627 | | 0.0 | 8.31 | 190000 | nan | 0.9627 | | 0.0 | 8.53 | 195000 | nan | 0.9627 | | 0.0 | 8.74 | 200000 | nan | 0.9627 | | 0.0 | 8.96 | 205000 | nan | 0.9627 | | 0.0 | 9.18 | 210000 | nan | 0.9627 | | 0.0 | 9.4 | 215000 | nan | 0.9627 | | 0.0 | 9.62 | 220000 | nan | 0.9627 | | 0.0 | 9.84 | 225000 | nan | 0.9627 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
Digakive/Hsgshs
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - text-classification - PyTorch - Transformers --- # fakeBert This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on a [news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) from Kaggle. ## Model description Fine-tuning Bert for text classification. ## Training and evaluation data Training & Validation: [Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) Testing: [Fake News Detection Challenge KDD 2020](https://www.kaggle.com/competitions/fakenewskdd2020/overview) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-5 - train_batch_size: 16 - eval_batch_size: 16 - optimizer: AdamW
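Since the card stops at the training setup, here is a hedged example of how such a fine-tuned BERT classifier could be used for inference. The checkpoint path is a placeholder (the card does not state the final Hub repository id), and the fake/real label mapping should be checked against the encoding used during training.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Placeholder path: substitute the actual fakeBert checkpoint directory or Hub id.
model_path = "path/to/fakeBert"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

text = "Breaking: scientists confirm chocolate cures all diseases."
inputs = tokenizer(text, truncation=True, padding=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(pred)  # class index; the fake/real mapping depends on how the dataset labels were encoded
```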
DimaOrekhov/transformer-method-name
[ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: JustAdvanceTechonology/medical_notes_mulitilingual results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # JustAdvanceTechonology/medical_notes_mulitilingual This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 8.7536 - Validation Loss: 6.1397 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 1209, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 11.2097 | 6.1454 | 0 | | 8.7069 | 6.1880 | 1 | | 8.7350 | 6.1834 | 2 | | 8.7021 | 6.1364 | 3 | | 8.7385 | 6.2117 | 4 | | 8.7318 | 6.2004 | 5 | | 8.7487 | 6.1531 | 6 | | 8.7536 | 6.1397 | 7 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.5.0 - Datasets 2.0.0 - Tokenizers 0.10.1
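For completeness, a possible way to query the fine-tuned mT5 checkpoint is sketched below. It assumes the Keras weights were pushed to the Hub under the name shown in this card and that TensorFlow weights are available; given the high training loss reported above, the generations may not be meaningful.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Assumption: the checkpoint is available on the Hub under the name shown in this card.
model_name = "JustAdvanceTechonology/medical_notes_mulitilingual"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical input sentence; the card does not document the expected prompt format.
inputs = tokenizer("Patient reports persistent cough and mild fever.", return_tensors="tf")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```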
Dizoid/Lll
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 --- ## Dataset [NEWS2018 DATASET_04, Task ID: M-EnHi](http://workshop.colips.org/news2018/dataset.html) ## Notebooks - `xmltodict.ipynb` contains the code to convert the `xml` files to `json` for training - `training_script.ipynb` contains the code for training and inference. It is a modified version of https://github.com/AI4Bharat/IndianNLP-Transliteration/blob/master/NoteBooks/Xlit_TrainingSetup_condensed.ipynb ## Predictions `pred_test.json` contains top-10 predictions on the validation set of the dataset ## Evaluation Scores on validation set TOP 10 SCORES FOR 1000 SAMPLES |Metrics | Score | |-----------|-----------| |ACC | 0.703000| |Mean F-score| 0.949289| |MRR | 0.486549| |MAP_ref | 0.381000| TOP 5 SCORES FOR 1000 SAMPLES: |Metrics | Score | |-----------|-----------| |ACC |0.621000| |Mean F-score |0.937985| |MRR |0.475033| |MAP_ref |0.381000| TOP 3 SCORES FOR 1000 SAMPLES: |Metrics | Score | |-----------|-----------| |ACC |0.560000| |Mean F-score |0.927025| |MRR |0.461333| |MAP_ref |0.381000| TOP 2 SCORES FOR 1000 SAMPLES: |Metrics | Score | |-----------|-----------| |ACC | 0.502000| |Mean F-score | 0.913697| |MRR | 0.442000| |MAP_ref | 0.381000| TOP 1 SCORES FOR 1000 SAMPLES: |Metrics | Score | |-----------|-----------| |ACC | 0.382000| |Mean F-score | 0.881272| |MRR | 0.382000| |MAP_ref | 0.380500|
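The evaluation tables above report top-k accuracy and related metrics; a small, hypothetical helper for recomputing top-k accuracy from the released predictions is sketched below. The JSON layout is an assumption (the actual structure of `pred_test.json` may differ), so the loading code would need to be adapted.

```python
import json

def top_k_accuracy(predictions, references, k):
    """predictions: ranked candidate lists; references: sets of gold transliterations."""
    hits = sum(
        1 for preds, gold in zip(predictions, references)
        if any(p in gold for p in preds[:k])
    )
    return hits / len(references)

# Assumed layout: {"source_word": {"gold": [...], "pred": [...]}, ...}
with open("pred_test.json", encoding="utf-8") as f:
    data = json.load(f)

preds = [entry["pred"] for entry in data.values()]
golds = [set(entry["gold"]) for entry in data.values()]

for k in (1, 2, 3, 5, 10):
    print(f"ACC@{k}: {top_k_accuracy(preds, golds, k):.4f}")
```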
Dkwkk/W
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en thumbnail: http://www.huggingtweets.com/sanjabh/1648901691950/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1484080880222351360/FtDB2j4B_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Lucid Dreams</div> <div style="text-align: center; font-size: 14px;">@sanjabh</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Lucid Dreams. | Data | Lucid Dreams | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 373 | | Short tweets | 137 | | Tweets kept | 2740 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2s7tzf32/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sanjabh's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1cl1cjnx) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1cl1cjnx/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sanjabh') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Waynehillsdev/waynehills_sentimental_kor
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "ElectraForSequenceClassification" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.5925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 2 | 5.9198 | | No log | 2.0 | 4 | 5.7019 | | No log | 3.0 | 6 | 5.5925 | ### Framework versions - Transformers 4.11.0 - Pytorch 1.10.2+cpu - Datasets 2.0.0 - Tokenizers 0.10.3
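The auto-generated card above lists only the training setup; a hedged inference example for a SQuAD-style DistilBERT checkpoint like this one is shown below. The model path is a placeholder, since the card does not give the final repository id.

```python
from transformers import pipeline

# Placeholder: replace with the actual fine-tuned repository id or local checkpoint directory.
qa = pipeline("question-answering", model="path/to/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What task was the model fine-tuned for?",
    context="This DistilBERT checkpoint was fine-tuned for extractive question answering, "
            "so it predicts an answer span inside the given context.",
)
print(result["answer"], round(result["score"], 3))
```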
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-25
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2208 - Accuracy: 0.924 - F1: 0.9240 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8538 | 1.0 | 250 | 0.3317 | 0.904 | 0.8999 | | 0.2599 | 2.0 | 500 | 0.2208 | 0.924 | 0.9240 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Tokenizers 0.11.6
DoyyingFace/bert-asian-hate-tweets-asonam-clean
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5598704865754364 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8697 - Matthews Correlation: 0.5599 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5223 | 1.0 | 535 | 0.5444 | 0.4309 | | 0.3457 | 2.0 | 1070 | 0.5213 | 0.5021 | | 0.2351 | 3.0 | 1605 | 0.6793 | 0.5234 | | 0.1693 | 4.0 | 2140 | 0.7587 | 0.5527 | | 0.1301 | 5.0 | 2675 | 0.8697 | 0.5599 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
albert-large-v2
[ "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26,792
2022-04-02T21:07:10Z
distilbert-base-uncased trained for 250K steps with batch size 64 on C4, MSMARCO, Wikipedia, S2ORC, News
albert-xlarge-v1
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
341
2022-04-02T21:12:40Z
distilbert-base-uncased trained for 500K steps with batch size 64 on C4, MSMARCO, Wikipedia, S2ORC, News
albert-xlarge-v2
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,973
2022-04-02T21:15:23Z
distilbert-base-uncased trained for 750K steps with batch size 64 on C4, MSMARCO, Wikipedia, S2ORC, News
albert-xxlarge-v2
[ "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42,640
null
distilbert-base-uncased trained for 680K steps (lowest loss on dev dataset) with batch size 64 on C4, MSMARCO, Wikipedia, S2ORC, News
bert-base-cased-finetuned-mrpc
[ "pytorch", "tf", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11,644
2022-04-02T21:45:00Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8480392156862745 - name: F1 type: f1 value: 0.89419795221843 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4044 - Accuracy: 0.8480 - F1: 0.8942 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 230 | 0.3830 | 0.8162 | 0.8673 | | No log | 2.0 | 460 | 0.3957 | 0.8456 | 0.8952 | | 0.4307 | 3.0 | 690 | 0.4044 | 0.8480 | 0.8942 | | 0.4307 | 4.0 | 920 | 0.5649 | 0.8407 | 0.8915 | | 0.1739 | 5.0 | 1150 | 0.5983 | 0.8480 | 0.8956 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
bert-base-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,621,271
2022-04-02T22:08:33Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: distilbert-base-uncased-finetuned-stsb results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.8636303639161342 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-stsb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5644 - Pearson: 0.8666 - Spearmanr: 0.8636 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | No log | 1.0 | 360 | 0.6366 | 0.8537 | 0.8516 | | 1.0464 | 2.0 | 720 | 0.6171 | 0.8632 | 0.8626 | | 0.4002 | 3.0 | 1080 | 0.6082 | 0.8663 | 0.8643 | | 0.4002 | 4.0 | 1440 | 0.5644 | 0.8666 | 0.8636 | | 0.2479 | 5.0 | 1800 | 0.5780 | 0.8654 | 0.8624 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
bert-base-german-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "de", "transformers", "exbert", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
175,983
2022-04-02T22:29:20Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: canine-s-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.059386434587477076 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # canine-s-finetuned-cola This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6653 - Matthews Correlation: 0.0594 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6132 | 1.0 | 535 | 0.6289 | 0.0 | | 0.6062 | 2.0 | 1070 | 0.6179 | 0.0 | | 0.6122 | 3.0 | 1605 | 0.6160 | 0.0 | | 0.5939 | 4.0 | 2140 | 0.6159 | 0.0 | | 0.5721 | 5.0 | 2675 | 0.6653 | 0.0594 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
bert-base-german-dbmdz-cased
[ "pytorch", "jax", "bert", "fill-mask", "de", "transformers", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,814
2022-04-02T23:02:39Z
--- language: en thumbnail: http://www.huggingtweets.com/clortown-elonmusk-stephencurry30/1648940589601/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1503591435324563456/foUrqiEw_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1488574779351187458/RlIQNUFG_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1484233608793518081/tOID8aXq_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & yeosang elf agenda & Stephen Curry</div> <div style="text-align: center; font-size: 14px;">@clortown-elonmusk-stephencurry30</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Elon Musk & yeosang elf agenda & Stephen Curry. | Data | Elon Musk | yeosang elf agenda | Stephen Curry | | --- | --- | --- | --- | | Tweets downloaded | 221 | 3143 | 3190 | | Retweets | 7 | 541 | 384 | | Short tweets | 62 | 463 | 698 | | Tweets kept | 152 | 2139 | 2108 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2sqcbnn5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @clortown-elonmusk-stephencurry30's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1mq1ftjh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1mq1ftjh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/clortown-elonmusk-stephencurry30') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
bert-base-multilingual-uncased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
328,585
null
---
language:
- es
- qu
tags:
- quechua
- translation
- spanish
license: apache-2.0
metrics:
- bleu
- sacrebleu
widget:
- text: "Dios ama a los hombres"
- text: "A pesar de todo, soy feliz"
- text: "¿Qué harán allí?"
- text: "Debes aprender a respetar"
---

# Spanish to Quechua translator

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small).

## Model description

t5-small-finetuned-spanish-to-quechua was trained for 46 epochs on 102,747 sentences; validation was performed with 12,844 sentences, and 12,843 sentences were used for testing.

## Intended uses & limitations

A large part of the dataset has been extracted from biblical texts, which makes the model perform better on certain types of sentences.

### How to use

You can import this model as follows:

```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
>>> model_name = 'hackathon-pln-es/t5-small-finetuned-spanish-to-quechua'
>>> model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```

To translate, you can do:

```python
>>> sentence = "Entonces dijo"
>>> input = tokenizer(sentence, return_tensors="pt")
>>> output = model.generate(input["input_ids"], max_length=40, num_beams=4, early_stopping=True)
>>> print('Original Sentence: {} \nTranslated sentence: {}'.format(sentence, tokenizer.decode(output[0])))
```

### Limitations and bias

Currently, this model can only translate into Ayacucho Quechua.

## Training data

To train this model we used the [Spanish to Quechua dataset](https://huggingface.co/datasets/hackathon-pln-es/spanish-to-quechua)

## Evaluation results

We obtained the following metrics during the training process:

- eval_bleu = 2.9691
- eval_loss = 1.2064628601074219

## Team members
- [Sara Benel](https://huggingface.co/sbenel)
- [Jose Vílchez](https://huggingface.co/JCarlos)
bert-large-cased-whole-word-masking
[ "pytorch", "tf", "jax", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,316
2022-04-03T00:55:24Z
---
license: apache-2.0
---
# -*- coding: utf-8 -*-
'''
Original file is located at
    https://colab.research.google.com/drive/1HrNm5UMZr2Zjmze_HKW799p6LAHM8BTa
'''

from google.colab import files
files.upload()

!pip install kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
!kaggle datasets download 'shaunthesheep/microsoft-catsvsdogs-dataset'
!unzip microsoft-catsvsdogs-dataset

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

image_dir='/content/PetImages/Cat'

!mkdir train_folder
!mkdir test_folder

import os
import shutil

# creating class directories for the training set
path='/content/train_folder/'
dir='upside_down'
dir2='normal'
training_normal= os.path.join(path, dir2)
training_upside= os.path.join(path, dir)
os.mkdir(training_normal)
os.mkdir(training_upside)

# creating class directories for the validation set
path='/content/test_folder/'
dir='upside_down'
dir2='normal'
training_normal= os.path.join(path, dir2)
training_upside= os.path.join(path, dir)
os.mkdir(training_normal)
os.mkdir(training_upside)

# copying only the cat images to my train folder
fnames = ['{}.jpg'.format(i) for i in range(2000)]
for fname in fnames:
    src = os.path.join('/content/PetImages/Cat', fname)
    dst = os.path.join('/content/train_folder/normal', fname)
    shutil.copyfile(src, dst)

# copying the next 2000 cat images to the validation folder
fnames = ['{}.jpg'.format(i) for i in range(2000, 4000)]
for fname in fnames:
    src = os.path.join('/content/PetImages/Cat', fname)
    dst = os.path.join('/content/test_folder/normal', fname)
    shutil.copyfile(src, dst)

from scipy import ndimage, misc
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import imageio
import cv2

# Inverting training images
outPath = '/content/train_folder/upside_down'
path ='/content/train_folder/normal'

# iterate through the names of contents of the folder
for image_path in os.listdir(path):
    # create the full input path and read the file
    input_path = os.path.join(path, image_path)
    image_to_rotate = plt.imread(input_path)
    # rotate the image (flip vertically, i.e. turn it upside down)
    rotated = np.flipud(image_to_rotate)
    # create full output path, 'example.jpg'
    # becomes 'rotated_example.jpg', save the file to disk
    fullpath = os.path.join(outPath, 'rotated_'+image_path)
    imageio.imwrite(fullpath, rotated)

# Inverting images for validation
outPath = '/content/test_folder/upside_down'
path ='/content/test_folder/normal'

# iterate through the names of contents of the folder
for image_path in os.listdir(path):
    # create the full input path and read the file
    input_path = os.path.join(path, image_path)
    image_to_rotate = plt.imread(input_path)
    # rotate the image (flip vertically, i.e. turn it upside down)
    rotated = np.flipud(image_to_rotate)
    # create full output path, 'example.jpg'
    # becomes 'rotated_example.jpg', save the file to disk
    fullpath = os.path.join(outPath, 'rotated_'+image_path)
    imageio.imwrite(fullpath, rotated)

# sanity-check one of the flipped images (they are written to the 'upside_down' folder)
ima='/content/train_folder/upside_down/rotated_1001.jpg'
image=plt.imread(ima)
plt.imshow(image)
# visualize the figure
plt.show()

train_dir='/content/train_folder'
train_gen=ImageDataGenerator(rescale=1./255)
train_images= train_gen.flow_from_directory(
    train_dir,
    target_size=(250,250),
    batch_size=50,
    class_mode='binary'
)

validation_dir='/content/test_folder'
test_gen=ImageDataGenerator(rescale=1./255)
test_images= test_gen.flow_from_directory(
    validation_dir,
    target_size=(250,250),
    batch_size=50,
    class_mode='binary'
)

model=tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(250,250,3)),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(128, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(512, activation='relu'), tf.keras.layers.Dense(1, activation='sigmoid') ]) from tensorflow.keras.optimizers import RMSprop model.compile(optimizer=RMSprop(learning_rate=0.001), loss=tf.keras.losses.BinaryCrossentropy(), metrics=['acc']) history=model.fit(train_images, validation_data=test_images, epochs=5, steps_per_epoch=40 )
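The notebook stops after training; a minimal inference sketch like the one below (not part of the original notebook, and the file path is a made-up example) shows how the fitted `model` could be used to flag an upside-down image.

```python
import numpy as np
import tensorflow as tf

# Hypothetical example file; any of the rotated test images created above would do.
img_path = '/content/test_folder/upside_down/rotated_2000.jpg'

img = tf.keras.preprocessing.image.load_img(img_path, target_size=(250, 250))
x = tf.keras.preprocessing.image.img_to_array(img) / 255.0  # same rescaling as the generators
x = np.expand_dims(x, axis=0)

# flow_from_directory orders classes alphabetically: 'normal' -> 0, 'upside_down' -> 1
prob = model.predict(x)[0][0]
print('upside_down' if prob > 0.5 else 'normal', prob)
```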
xlm-roberta-large-finetuned-conll03-german
[ "pytorch", "rust", "xlm-roberta", "token-classification", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:1911.02116", "arxiv:1910.09700", "transformers", "autotrain_compatible", "has_space" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3,929
2022-04-03T14:54:33Z
A version of https://huggingface.co/johnowhitaker/orbgan_e1 trained only on dark images.
ARATHI/electra-small-discriminator-fintuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### Dataset used [Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) ### Labels Fake news: 1 <br/> Real news: 0 ### Usage ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig import torch config = AutoConfig.from_pretrained("bhavitvyamalik/fake-news_xtremedistil-l6-h256-uncased") model = AutoModelForSequenceClassification.from_pretrained("bhavitvyamalik/fake-news_xtremedistil-l6-h256-uncased", config=config) tokenizer = AutoTokenizer.from_pretrained("microsoft/xtremedistil-l6-h256-uncased", use_fast=True) text = "According to reports by Fox News, Biden is the President of the USA" encode = tokenizer(text, max_length=512, truncation=True, padding="max_length", return_tensors="pt") output = model(**encode) print(torch.argmax(output["logits"])) ``` ### Performance on test data ```json 'test/accuracy': 0.9977836608886719, 'test/aucroc': 0.9999998807907104, 'test/f1': 0.9976308941841125, 'test/loss': 0.00828308891505003 ``` ### Run can be tracked here [Wandb project for Fake news classifier](https://wandb.ai/bhavitvya/Fake%20news%20classifier?workspace=user-bhavitvya)
Ab0/autoencoder-keras-mnist-demo
[ "keras" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2022-04-04T18:59:04Z
--- license: apache-2.0 datasets: - eurosat widget: - src: forest.png example_title: Forest --- # ConvNext fine-tuned on Eurosat This model is a `facebook/convnext-tiny-224` model fine-tuned on the [Eurosat dataset](https://github.com/phelber/EuroSAT).
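A minimal inference sketch follows; the repo id below is a hypothetical placeholder, since the card does not state where this fine-tuned checkpoint is published.

```python
from transformers import pipeline

# Placeholder repo id -- replace with the id of this fine-tuned checkpoint.
model_id = "nielsr/convnext-tiny-finetuned-eurosat"
classifier = pipeline("image-classification", model=model_id)
print(classifier("forest.png"))  # image referenced by the widget above
```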
AdapterHub/bert-base-uncased-pf-hotpotqa
[ "bert", "en", "dataset:hotpot_qa", "arxiv:2104.08247", "adapter-transformers", "question-answering" ]
question-answering
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: opus-mt-ar-en-finetunedTanzil-v7-ar-to-en results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-ar-en-finetunedTanzil-v7-ar-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1919 - Validation Loss: 0.5047 - Train Rouge1: 49.6877 - Train Rouge2: 25.9574 - Train Rougel: 45.2590 - Train Rougelsum: 45.7464 - Train Gen Len: 85.57 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 0.1959 | 0.5105 | 48.2182 | 23.4978 | 44.1127 | 44.6422 | 87.45 | 0 | | 0.1950 | 0.5114 | 49.5777 | 25.1663 | 45.7183 | 46.0930 | 86.72 | 1 | | 0.1937 | 0.5074 | 49.1793 | 24.1899 | 45.3374 | 45.5902 | 84.805 | 2 | | 0.1929 | 0.5075 | 49.1553 | 24.8199 | 44.7342 | 45.1392 | 87.495 | 3 | | 0.1919 | 0.5047 | 49.6877 | 25.9574 | 45.2590 | 45.7464 | 85.57 | 4 | ### Framework versions - Transformers 4.17.0.dev0 - TensorFlow 2.7.0 - Datasets 1.18.4.dev0 - Tokenizers 0.10.3
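For reference, a hedged usage sketch: the fine-tuned checkpoint is loaded the same way as its base model, so the base `Helsinki-NLP/opus-mt-ar-en` id below is only a stand-in for this model's repository id.

```python
from transformers import pipeline

# Stand-in repo id; replace with this fine-tuned checkpoint's id on the Hub.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ar-en")
print(translator("بسم الله الرحمن الرحيم", max_length=128)[0]["translation_text"])
```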
AkshatSurolia/ConvNeXt-FaceMask-Finetuned
[ "pytorch", "safetensors", "convnext", "image-classification", "dataset:Face-Mask18K", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
image-classification
{ "architectures": [ "ConvNextForImageClassification" ], "model_type": "convnext", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
56
null
--- language: multilingual tags: - emotion - emotion-analysis - multilingual widget: - text: "Guarda! ci sono dei bellissimi capibara!" example_title: "Emotion Classification 1" - text: "Sei una testa di cazzo!!" example_title: "Emotion Classification 2" - text: "Quelle bonne nouvelle!" example_title: "Emotion Classification 3" arxiv: "" --- # [Federico Bianchi](https://federicobianchi.io/) • [Debora Nozza](http://dnozza.github.io/) • [Dirk Hovy](http://www.dirkhovy.com/) ## Abstract Detecting emotion in text allows social and computational scientists to study how people behave and react to online events. However, developing these tools for different languages requires data that is not always available. This paper collects the available emotion detection datasets across 19 languages. We train a multilingual emotion prediction model for social media data, XLM-EMO. The model shows competitive performance in a zero-shot setting, suggesting it is helpful in the context of low-resource languages. We release our model to the community so that interested researchers can directly use it. ## Model This model is the fine-tuned version of the [XLM-T](https://aclanthology.org/2022.lrec-1.27/) model. ### Intended Use The model is intended as a research output for research communities. #### Primary intended uses The primary intended users of these models are AI researchers. ## Results This model had an F1 of 0.85 on the test set. ## License For models, restrictions may apply to the data (which are derived from existing datasets) or Twitter (main data source). We refer users to the original licenses accompanying each dataset and Twitter regulations. THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ## Citation Please use the following BibTeX entry if you use this model in your project: ``` @inproceedings{bianchi2021feel, title = "{XLM-EMO: Multilingual Emotion Prediction in Social Media Text}", author = "Bianchi, Federico and Nozza, Debora and Hovy, Dirk", booktitle = "Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", year = "2022", publisher = "Association for Computational Linguistics", } ```
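A hedged usage sketch; the repo id below is an assumption, so substitute the actual XLM-EMO checkpoint id if it differs.

```python
from transformers import pipeline

# Assumed repo id for the released XLM-EMO checkpoint.
classifier = pipeline("text-classification", model="MilaNLProc/xlm-emo-t")
print(classifier("Guarda! ci sono dei bellissimi capibara!"))
```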
AkshaySg/langid
[ "multilingual", "dataset:VoxLingua107", "speechbrain", "audio-classification", "embeddings", "Language", "Identification", "pytorch", "ECAPA-TDNN", "TDNN", "VoxLingua107", "license:apache-2.0" ]
audio-classification
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- language: en license: apache-2.0 tags: - text-classification - int8 - Intel® Neural Compressor - QuantizationAwareTraining datasets: - mrpc metrics: - f1 --- # INT8 BERT base uncased finetuned MRPC ### QuantizationAwareTraining This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) using [Intel® Neural Compressor](https://github.com/intel/neural-compressor). The original FP32 model is the fine-tuned [Intel/bert-base-uncased-mrpc](https://huggingface.co/Intel/bert-base-uncased-mrpc). ### Test result | |INT8|FP32| |---|:---:|:---:| | **Accuracy (eval-f1)** |0.9142|0.9042| | **Model size (MB)** |107|418| ### Load with optimum: ```python from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification int8_model = IncQuantizedModelForSequenceClassification.from_pretrained( 'Intel/bert-base-uncased-mrpc-int8-qat', ) ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 - train_batch_size: 8 - eval_batch_size: 8 - eval_steps: 100 - load_best_model_at_end: True - metric_for_best_model: f1 - early_stopping_patience: 6 - early_stopping_threshold: 0.001
Al/mymodel
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - null model_index: - name: bert-base-chinese-complaint-128 results: - task: name: Masked Language Modeling type: fill-mask --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-chinese-complaint-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3004 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.3735 | 1.0 | 1250 | 2.4628 | | 2.2412 | 2.0 | 2500 | 2.0378 | | 1.9251 | 3.0 | 3750 | 1.8368 | | 1.7407 | 4.0 | 5000 | 1.6972 | | 1.6137 | 5.0 | 6250 | 1.5937 | | 1.5365 | 6.0 | 7500 | 1.5315 | | 1.4662 | 7.0 | 8750 | 1.4921 | | 1.3985 | 8.0 | 10000 | 1.4517 | | 1.3509 | 9.0 | 11250 | 1.4308 | | 1.3047 | 10.0 | 12500 | 1.3906 | | 1.2745 | 11.0 | 13750 | 1.3467 | | 1.2377 | 12.0 | 15000 | 1.3306 | | 1.2139 | 13.0 | 16250 | 1.3205 | | 1.2027 | 14.0 | 17500 | 1.3098 | | 1.1722 | 15.0 | 18750 | 1.2845 | | 1.1697 | 16.0 | 20000 | 1.3004 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.7.1 - Datasets 1.16.1 - Tokenizers 0.10.3
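A minimal fill-mask sketch; the repo id below is a placeholder for wherever this checkpoint is published, and the base `bert-base-chinese` is shown only to illustrate the call.

```python
from transformers import pipeline

# Placeholder repo id -- replace with this fine-tuned checkpoint's id on the Hub.
fill_mask = pipeline("fill-mask", model="bert-base-chinese")
print(fill_mask("我要投诉这家[MASK]司。"))
```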
Alaeddin/convbert-base-turkish-ner-cased
[ "pytorch", "convbert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "ConvBertForTokenClassification" ], "model_type": "convbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - espnet - audio - text-to-speech language: gos --- # Tacotron2 Gronings
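A hedged inference sketch using the ESPnet2 TTS API; the model tag below is an assumption, so replace it with this repository's actual id.

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Assumed model tag; replace with this repository's id on the Hub.
tts = Text2Speech.from_pretrained("ahnafsamin/Tacotron2-gronings")
output = tts("Moi, hou gaait t?")          # example Gronings text
sf.write("out.wav", output["wav"].numpy(), tts.fs)
```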
AlanDev/test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.925 - name: F1 type: f1 value: 0.9250750482655898 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2236 - Accuracy: 0.925 - F1: 0.9251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8341 | 1.0 | 250 | 0.3329 | 0.8985 | 0.8950 | | 0.2562 | 2.0 | 500 | 0.2236 | 0.925 | 0.9251 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
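A short usage sketch; the repo id below is a stand-in, since the card does not state where this fine-tuned checkpoint lives on the Hub.

```python
from transformers import pipeline

# Stand-in repo id; replace with this fine-tuned checkpoint's id on the Hub.
classifier = pipeline("text-classification", model="bhadresh-savani/distilbert-base-uncased-emotion")
print(classifier("I am so happy with these results!"))
```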
Aleksandar/bert-srb-base-cased-oscar
[ "pytorch", "bert", "fill-mask", "transformers", "generated_from_trainer", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - audio-classification - generated_from_trainer datasets: - common_language metrics: - accuracy model-index: - name: hubert-base-common-language results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hubert-base-common-language This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the common_language dataset. It achieves the following results on the evaluation set: - Loss: 1.3477 - Accuracy: 0.7317 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 4 - seed: 0 - distributed_type: IPU - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.25 - num_epochs: 10.0 - training precision: Mixed Precision ### Training results ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.0+cpu - Datasets 2.0.0 - Tokenizers 0.11.6
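A hedged usage sketch; the repo id below is an assumption and should be replaced with this checkpoint's actual id.

```python
from transformers import pipeline

# Assumed repo id for this fine-tuned checkpoint.
classifier = pipeline("audio-classification", model="Graphcore/hubert-base-common-language")
print(classifier("speech_sample.wav", top_k=5))
```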
Aleksandar/bert-srb-ner-setimes-lr
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en datasets: - msp-podcast inference: true tags: - speech - audio - wav2vec2 - audio-classification - emotion-recognition license: cc-by-nc-sa-4.0 --- # Model for Dimensional Speech Emotion Recognition based on Wav2vec 2.0 The model expects a raw audio signal as input and outputs predictions for arousal, dominance and valence in a range of approximately 0...1. In addition, it also provides the pooled states of the last transformer layer. The model was created by fine-tuning [ Wav2Vec2-Large-Robust](https://huggingface.co/facebook/wav2vec2-large-robust) on [MSP-Podcast](https://ecs.utdallas.edu/research/researchlabs/msp-lab/MSP-Podcast.html) (v1.7). The model was pruned from 24 to 12 transformer layers before fine-tuning. An [ONNX](https://onnx.ai/") export of the model is available from [doi:10.5281/zenodo.6221127](https://zenodo.org/record/6221127). Further details are given in the associated [paper](https://arxiv.org/abs/2203.07378) and [tutorial](https://github.com/audeering/w2v2-how-to). # Usage ```python import numpy as np import torch import torch.nn as nn from transformers import Wav2Vec2Processor from transformers.models.wav2vec2.modeling_wav2vec2 import ( Wav2Vec2Model, Wav2Vec2PreTrainedModel, ) class RegressionHead(nn.Module): r"""Classification head.""" def __init__(self, config): super().__init__() self.dense = nn.Linear(config.hidden_size, config.hidden_size) self.dropout = nn.Dropout(config.final_dropout) self.out_proj = nn.Linear(config.hidden_size, config.num_labels) def forward(self, features, **kwargs): x = features x = self.dropout(x) x = self.dense(x) x = torch.tanh(x) x = self.dropout(x) x = self.out_proj(x) return x class EmotionModel(Wav2Vec2PreTrainedModel): r"""Speech emotion classifier.""" def __init__(self, config): super().__init__(config) self.config = config self.wav2vec2 = Wav2Vec2Model(config) self.classifier = RegressionHead(config) self.init_weights() def forward( self, input_values, ): outputs = self.wav2vec2(input_values) hidden_states = outputs[0] hidden_states = torch.mean(hidden_states, dim=1) logits = self.classifier(hidden_states) return hidden_states, logits # load model from hub device = 'cpu' model_name = 'audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim' processor = Wav2Vec2Processor.from_pretrained(model_name) model = EmotionModel.from_pretrained(model_name) # dummy signal sampling_rate = 16000 signal = np.zeros((1, sampling_rate), dtype=np.float32) def process_func( x: np.ndarray, sampling_rate: int, embeddings: bool = False, ) -> np.ndarray: r"""Predict emotions or extract embeddings from raw audio signal.""" # run through processor to normalize signal # always returns a batch, so we just get the first entry # then we put it on the device y = processor(x, sampling_rate=sampling_rate) y = y['input_values'][0] y = torch.from_numpy(y).to(device) # run through model with torch.no_grad(): y = model(y)[0 if embeddings else 1] # convert to numpy y = y.detach().cpu().numpy() return y process_func(signal, sampling_rate) # Arousal dominance valence # [[0.5460759 0.6062269 0.4043165]] process_func(signal, sampling_rate, embeddings=True) # Pooled hidden states of last transformer layer # [[-0.00752167 0.0065819 -0.00746339 ... 0.00663631 0.00848747 # 0.00599209]] ```
Aleksandar/electra-srb-ner
[ "pytorch", "safetensors", "electra", "token-classification", "dataset:wikiann", "transformers", "generated_from_trainer", "autotrain_compatible" ]
token-classification
{ "architectures": [ "ElectraForTokenClassification" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- language: - es tags: - biomedical - clinical - eHR - spanish license: apache-2.0 datasets: - "PlanTL-GOB-ES/pharmaconer" metrics: - f1 model-index: - name: PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer results: - task: type: token-classification dataset: name: pharmaconer type: PlanTL-GOB-ES/pharmaconer metrics: - name: f1 type: f1 value: 0.8913 widget: - text: "Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D (103,7 pg/ml y 272 ng/ml, respectivamente), atribuidos al exceso de suplementación de vitamina D." - text: " Por el hallazgo de múltiples fracturas por estrés, se procedió a estudio en nuestras consultas, realizándose análisis con función renal, calcio sérico y urinario, calcio iónico, magnesio y PTH, que fueron normales." - text: "Se solicitó una analítica que incluía hemograma, bioquímica, anticuerpos antinucleares (ANA) y serologías, examen de orina, así como biopsia de la lesión. Los resultados fueron normales, con ANA, anti-Sm, anti-RNP, anti-SSA, anti-SSB, anti-Jo1 y anti-Scl70 negativos." --- # Spanish RoBERTa-base biomedical model finetuned for the Named Entity Recognition (NER) task on the PharmaCoNER dataset. ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description A fine-tuned version of the [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish biomedical corpus known to date, composed of biomedical documents, clinical cases and EHR documents for a total of 1.1B tokens of clean and deduplicated text processed. For more details about the corpora and training, check the _bsc-bio-ehr-es_ model card. ## Intended uses and limitations ## How to use ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training The dataset used is [PharmaCoNER](https://huggingface.co/datasets/PlanTL-GOB-ES/pharmaconer), a NER dataset annotated with substances, compounds and proteins entities. For further information, check the [official website](https://temu.bsc.es/pharmaconer/). ## Evaluation F1 Score: 0.8913 For evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es). 
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ## Citing information If you use these models, please cite our work: ```bibtext @inproceedings{carrino-etal-2022-pretrained, title = "Pretrained Biomedical Language Models for Clinical {NLP} in {S}panish", author = "Carrino, Casimiro Pio and Llop, Joan and P{\`a}mies, Marc and Guti{\'e}rrez-Fandi{\~n}o, Asier and Armengol-Estap{\'e}, Jordi and Silveira-Ocampo, Joaqu{\'\i}n and Valencia, Alfonso and Gonzalez-Agirre, Aitor and Villegas, Marta", booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.bionlp-1.19", doi = "10.18653/v1/2022.bionlp-1.19", pages = "193--199", abstract = "This work presents the first large-scale biomedical Spanish language models trained from scratch, using large biomedical corpora consisting of a total of 1.1B tokens and an EHR corpus of 95M tokens. We compared them against general-domain and other domain-specific models for Spanish on three clinical NER tasks. As main results, our models are superior across the NER tasks, rendering them more convenient for clinical NLP applications. Furthermore, our findings indicate that when enough data is available, pre-training from scratch is better than continual pre-training when tested on clinical tasks, raising an exciting research question about which approach is optimal. Our models and fine-tuning scripts are publicly available at HuggingFace and GitHub.", } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. 
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
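The "How to use" section of this card is left empty; a minimal sketch, using the repo id given in the card's metadata, would be:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer",
    aggregation_strategy="simple",
)
print(ner("Se administró paracetamol y se solicitó determinación de PTH."))
```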
Aleksandar/electra-srb-oscar
[ "pytorch", "electra", "fill-mask", "transformers", "generated_from_trainer", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "ElectraForMaskedLM" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: en thumbnail: http://www.huggingtweets.com/chrismedlandf1-elonmusk-scarbstech/1649253035547/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1503591435324563456/foUrqiEw_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/456005573/scarbs_400x400.JPG&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1252178304192389120/bXT3lbuR_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Craig Scarborough & Chris Medland</div> <div style="text-align: center; font-size: 14px;">@chrismedlandf1-elonmusk-scarbstech</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Elon Musk & Craig Scarborough & Chris Medland. | Data | Elon Musk | Craig Scarborough | Chris Medland | | --- | --- | --- | --- | | Tweets downloaded | 2621 | 3249 | 3250 | | Retweets | 116 | 387 | 196 | | Short tweets | 795 | 646 | 102 | | Tweets kept | 1710 | 2216 | 2952 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3m6vm0tf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chrismedlandf1-elonmusk-scarbstech's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mnfs00gg) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mnfs00gg/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/chrismedlandf1-elonmusk-scarbstech') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Aleksandar1932/gpt2-pop
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit --- Base model: [roberta-large](https://huggingface.co/roberta-large) Fine tuned for persuadee donation detection on the [Persuasion For Good Dataset](https://gitlab.com/ucdavisnlp/persuasionforgood) (Wang et al., 2019): Given a complete dialogue from Persuasion For Good, the task is to predict the binary label: - 0: the persuadee does not intend to donate - 1: the persuadee intends to donate Only persuadee utterances are input to the model for this task - persuader utterances are discarded. Each training example is the concatenation of all persuadee utterances in a single dialogue, each separated by the `</s>` token. For example: **Input**: `<s>How are you?</s>Can you tell me more about the charity?</s>...</s>Sure, I'll donate a dollar.</s>...</s>` **Label**: 1 **Input**: `<s>How are you?</s>Can you tell me more about the charity?</s>...</s>I am not interested.</s>...</s>` **Label**: 0 The following Dialogues were excluded: - 146 dialogues where a donation of 0 was made at the end of the task but a non-zero amount was pledged by the persuadee in the dialogue, per the following regular expression: `(?:\$(?:0\.)?[1-9]|[1-9][.0-9]*?(?: ?\$| dollars?| cents?))` Data Info: - **Training set**: 587 dialogues, using actual end-task donations as labels - **Validation set**: 141 dialogues, using manual donation intention labels from Persuasion For Good 'AnnSet' - **Test set**: 143 dialogues, using manual donation intention labels from Persuasion For Good 'AnnSet' Training Info: - **Loss**: CrossEntropy with class weights: 1.5447 (class 0) and 0.7393 (class 1). These weights were derived from the training split. - **Early Stopping**: The checkpoint with the highest validation macro f1 was selected. This occurred at step 35 (see training metrics for more detail). Testing Info: - **Test Macro F1**: 0.893 - **Test Accuracy**: 0.902
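A hedged usage sketch; the repo id below is a hypothetical placeholder, since the card does not state where the fine-tuned checkpoint is published.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical placeholder repo id -- replace with this checkpoint's actual id.
model_id = "roberta-large-persuadee-donation-detection"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Persuadee utterances only, joined with the </s> separator described above;
# the tokenizer adds the leading <s> and trailing </s> automatically.
utterances = ["How are you?", "Can you tell me more about the charity?", "Sure, I'll donate a dollar."]
inputs = tokenizer("</s>".join(utterances), return_tensors="pt", truncation=True)
with torch.no_grad():
    label = model(**inputs).logits.argmax(dim=-1).item()
print(label)  # 1 = intends to donate, 0 = does not
```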
AlekseyKorshuk/bert
[ "pytorch", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- tags: - SpaceInvadersNoFrameskip-v4 --- # **PPO** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **PPO** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Evaluation Results Mean rewards over successive evaluation runs: - mean_reward=1181.00 +/- 93.8296328459192 - mean_reward=1190.50 +/- 114.1807777167418 - mean_reward=1147.50 +/- 39.82775414205526 - mean_reward=1197.00 +/- 125.76167937809991 - mean_reward=1261.00 +/- 149.81321704042003 - mean_reward=1246.00 +/- 128.81770064707723 ## Usage (with Stable-baselines3) TODO: Add your code
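The usage section above is left as a TODO; a hedged sketch of loading such a checkpoint with `huggingface_sb3` follows, where the repo id and filename are assumptions to be replaced with this repository's values.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed repo id and filename -- replace with the values for this repository.
checkpoint = load_from_hub(
    repo_id="ThomasSimonini/ppo-SpaceInvadersNoFrameskip-v4",
    filename="ppo-SpaceInvadersNoFrameskip-v4.zip",
)
model = PPO.load(checkpoint)
```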
AlekseyKorshuk/comedy-scripts
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- language: en license: mit --- # Fairseq-dense 13B - Janeway ## Model Description Fairseq-dense 13B-Janeway is a finetune created using Fairseq's MoE dense model. ## Training data The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is identical to the one used by GPT-Neo-2.7B-Janeway. Some parts of the dataset have been prepended using the following text: `[Genre: <genre1>,<genre2>]` ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-13B-Janeway') >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50) [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}] ``` ### Limitations and Biases Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion). ### BibTeX entry and citation info ``` Artetxe et al. (2021): Efficient Large Scale Language Modeling with Mixtures of Experts ```
AlekseyKorshuk/horror-scripts
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
19
null
--- language: en thumbnail: http://www.huggingtweets.com/chrismedlandf1/1649255880540/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1252178304192389120/bXT3lbuR_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Chris Medland</div> <div style="text-align: center; font-size: 14px;">@chrismedlandf1</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Chris Medland. | Data | Chris Medland | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 196 | | Short tweets | 102 | | Tweets kept | 2952 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jton7o0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chrismedlandf1's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qle9s0v) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qle9s0v/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/chrismedlandf1') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AlekseyKulnevich/Pegasus-HeaderGeneration
[ "pytorch", "pegasus", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "PegasusForConditionalGeneration" ], "model_type": "pegasus", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - vision - generated_from_trainer - image-segmentation datasets: - segments/sidewalk-semantic model-index: - name: sidewalk-semantic-demo results: [] widget: - src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg example_title: Brugge --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sidewalk-semantic-demo This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7591 - Mean Iou: 0.1135 - Mean Accuracy: 0.1608 - Overall Accuracy: 0.6553 - Per Category Iou: [nan, 0.38512238586129177, 0.723869670479682, 3.007496184239216e-05, 0.04329871029371091, 0.0006725029325634934, nan, 0.0, 0.0, 0.0, 0.5420712902837528, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4939727049879936, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5630706428968278, 0.2911849732223226, 0.5899473333836793, 0.0, 0.0, 1.723395088323998e-05, 0.0] - Per Category Accuracy: [nan, 0.6995968221991989, 0.8870903675336742, 3.007496184239216e-05, 0.043772127605383085, 0.0006731284624713075, nan, 0.0, 0.0, 0.0, 0.8074880705716012, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8257698903048035, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9746918606102934, 0.3057553223999185, 0.6001142624744604, 0.0, 0.0, 1.7275073149137866e-05, 0.0] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | 2.3589 | 1.0 | 53 | 1.9020 | 0.1014 | 0.1491 | 0.6442 | [0.0, 0.3612513514640175, 0.6751826209974531, 0.0, 0.030376890155720412, 0.0008039971158010613, nan, 2.235273737210043e-05, 0.0, 0.0, 0.5369771616036864, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4924640887729494, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5705205266526164, 0.07944837262494953, 0.5986634961452602, 0.0, 0.0, 0.00011218284533795612, 0.0] | [nan, 0.523053840654786, 0.9469253318772407, 0.0, 0.030589314463641413, 
0.0008054985216698098, nan, 2.2371239534454507e-05, 0.0, 0.0, 0.8528562962514211, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7547252442297603, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9698553453075568, 0.08054302832748386, 0.6107703679316233, 0.0, 0.0, 0.00011444735961303836, 0.0] | | 2.1214 | 2.0 | 106 | 1.7800 | 0.1158 | 0.1627 | 0.6622 | [nan, 0.3912271306195065, 0.7114203717790301, 0.0001503748092119608, 0.04491329385698775, 0.0008871978593462472, nan, 1.3975654410017748e-06, 0.0, 0.0, 0.5167420849064452, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.49676247687874375, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5965069148571663, 0.3115535309159788, 0.636016670211685, 0.0, 0.0, 0.0, 0.0] | [nan, 0.6306423988442347, 0.9198450793635351, 0.0001503748092119608, 0.045391490029595895, 0.0008886008009872551, nan, 1.3982024709034067e-06, 0.0, 0.0, 0.8587918189550764, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8103648148965297, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9600035488335386, 0.3307256120335472, 0.6505175702762634, 0.0, 0.0, 0.0, 0.0] | | 1.9022 | 3.0 | 159 | 1.7591 | 0.1135 | 0.1608 | 0.6553 | [nan, 0.38512238586129177, 0.723869670479682, 3.007496184239216e-05, 0.04329871029371091, 0.0006725029325634934, nan, 0.0, 0.0, 0.0, 0.5420712902837528, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4939727049879936, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5630706428968278, 0.2911849732223226, 0.5899473333836793, 0.0, 0.0, 1.723395088323998e-05, 0.0] | [nan, 0.6995968221991989, 0.8870903675336742, 3.007496184239216e-05, 0.043772127605383085, 0.0006731284624713075, nan, 0.0, 0.0, 0.0, 0.8074880705716012, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8257698903048035, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9746918606102934, 0.3057553223999185, 0.6001142624744604, 0.0, 0.0, 1.7275073149137866e-05, 0.0] | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
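A hedged inference sketch for this SegFormer checkpoint; the repo id below is an assumption based on the model name in the card.

```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

# Assumed repo id -- replace with this checkpoint's actual id on the Hub.
model_id = "segments-tobias/sidewalk-semantic-demo"
feature_extractor = SegformerFeatureExtractor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("sidewalk.jpg")
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # (batch, num_labels, height/4, width/4)
pred = logits.argmax(dim=1)[0]        # per-pixel class indices at reduced resolution
```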
AlekseyKulnevich/Pegasus-QuestionGeneration
[ "pytorch", "pegasus", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "PegasusForConditionalGeneration" ], "model_type": "pegasus", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17
null
--- license: mit tags: - generated_from_trainer datasets: - dutch_social metrics: - accuracy - f1 - precision - recall model-index: - name: robbert-twitter-sentiment results: - task: name: Text Classification type: text-classification dataset: name: dutch_social type: dutch_social args: dutch_social metrics: - name: Accuracy type: accuracy value: 0.749 - name: F1 type: f1 value: 0.7491844724992662 - name: Precision type: precision value: 0.7493911755249737 - name: Recall type: recall value: 0.749 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robbert-twitter-sentiment This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on the dutch_social dataset. It achieves the following results on the evaluation set: - Loss: 0.6818 - Accuracy: 0.749 - F1: 0.7492 - Precision: 0.7494 - Recall: 0.749 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.7485 | 1.0 | 188 | 0.7670 | 0.692 | 0.6915 | 0.6920 | 0.692 | | 0.5202 | 2.0 | 376 | 0.6818 | 0.749 | 0.7492 | 0.7494 | 0.749 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cpu - Datasets 2.0.0 - Tokenizers 0.12.0
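The robbert-twitter-sentiment card above reports accuracy and F1 on dutch_social but shows no inference example. A minimal sketch with the text-classification pipeline follows; the hub id is a placeholder for wherever the fine-tuned checkpoint is hosted.

```python
from transformers import pipeline

# Placeholder hub id -- substitute the actual repository of the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="your-org/robbert-twitter-sentiment")

# dutch_social is a three-way sentiment task, so one label with a score is returned per tweet.
print(classifier("Wat een geweldige dag, alles loopt op rolletjes!"))
```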
AlexMaclean/sentence-compression-roberta
[ "pytorch", "roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8651268890789849 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1398 - F1: 0.8651 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2615 | 1.0 | 525 | 0.1515 | 0.8253 | | 0.1285 | 2.0 | 1050 | 0.1423 | 0.8490 | | 0.0803 | 3.0 | 1575 | 0.1398 | 0.8651 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
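The xlm-roberta-base-finetuned-panx-de card above is an auto-generated token-classification card without a usage example. A minimal NER sketch follows; the hub id is a placeholder, and `aggregation_strategy="simple"` is just one convenient way to merge word pieces into whole entity spans.

```python
from transformers import pipeline

# Placeholder hub id -- replace with the actual repository of the PAN-X.de checkpoint.
ner = pipeline(
    "token-classification",
    model="your-org/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```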
AlexN/xls-r-300m-fr-0
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: deberta-base-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-squad This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 1984 - distributed_type: IPU - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.25 - num_epochs: 2.0 - training precision: Mixed Precision ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cpu - Datasets 2.3.3.dev0 - Tokenizers 0.12.1
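The deberta-base-squad card above documents IPU training settings but gives no inference example. A minimal extractive question-answering sketch (plain CPU/GPU inference, not IPU) follows; the hub id and the example context are placeholders.

```python
from transformers import pipeline

# Placeholder hub id -- the card only gives the model name, not its full repository path.
qa = pipeline("question-answering", model="your-org/deberta-base-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="deberta-base-squad is a fine-tuned version of microsoft/deberta-base on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```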
Alicanke/Wyau
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - all license: apache-2.0 tags: - fleurs-lang_id - google/xtreme_s - generated_from_trainer datasets: - google/xtreme_s metrics: - accuracy model-index: - name: xtreme_s_xlsr_300m_fleurs_langid results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtreme_s_xlsr_300m_fleurs_langid This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - FLEURS.ALL dataset. It achieves the following results on the evaluation set: - Accuracy: 0.7271 - Accuracy Af Za: 0.3865 - Accuracy Am Et: 0.8818 - Accuracy Ar Eg: 0.9977 - Accuracy As In: 0.9858 - Accuracy Ast Es: 0.8362 - Accuracy Az Az: 0.8386 - Accuracy Be By: 0.4085 - Accuracy Bn In: 0.9989 - Accuracy Bs Ba: 0.2508 - Accuracy Ca Es: 0.6947 - Accuracy Ceb Ph: 0.9852 - Accuracy Cmn Hans Cn: 0.9799 - Accuracy Cs Cz: 0.5353 - Accuracy Cy Gb: 0.9716 - Accuracy Da Dk: 0.6688 - Accuracy De De: 0.7807 - Accuracy El Gr: 0.7692 - Accuracy En Us: 0.9815 - Accuracy Es 419: 0.9846 - Accuracy Et Ee: 0.5230 - Accuracy Fa Ir: 0.8462 - Accuracy Ff Sn: 0.2348 - Accuracy Fi Fi: 0.9978 - Accuracy Fil Ph: 0.9564 - Accuracy Fr Fr: 0.9852 - Accuracy Ga Ie: 0.8468 - Accuracy Gl Es: 0.5016 - Accuracy Gu In: 0.973 - Accuracy Ha Ng: 0.9163 - Accuracy He Il: 0.8043 - Accuracy Hi In: 0.9354 - Accuracy Hr Hr: 0.3654 - Accuracy Hu Hu: 0.8044 - Accuracy Hy Am: 0.9914 - Accuracy Id Id: 0.9869 - Accuracy Ig Ng: 0.9360 - Accuracy Is Is: 0.0217 - Accuracy It It: 0.8 - Accuracy Ja Jp: 0.7385 - Accuracy Jv Id: 0.5824 - Accuracy Ka Ge: 0.8611 - Accuracy Kam Ke: 0.4184 - Accuracy Kea Cv: 0.8692 - Accuracy Kk Kz: 0.8727 - Accuracy Km Kh: 0.7030 - Accuracy Kn In: 0.9630 - Accuracy Ko Kr: 0.9843 - Accuracy Ku Arab Iq: 0.9577 - Accuracy Ky Kg: 0.8936 - Accuracy Lb Lu: 0.8897 - Accuracy Lg Ug: 0.9253 - Accuracy Ln Cd: 0.9644 - Accuracy Lo La: 0.1580 - Accuracy Lt Lt: 0.4686 - Accuracy Luo Ke: 0.9922 - Accuracy Lv Lv: 0.6498 - Accuracy Mi Nz: 0.9613 - Accuracy Mk Mk: 0.7636 - Accuracy Ml In: 0.6962 - Accuracy Mn Mn: 0.8462 - Accuracy Mr In: 0.3911 - Accuracy Ms My: 0.3632 - Accuracy Mt Mt: 0.6188 - Accuracy My Mm: 0.9705 - Accuracy Nb No: 0.6891 - Accuracy Ne Np: 0.8994 - Accuracy Nl Nl: 0.9093 - Accuracy Nso Za: 0.8873 - Accuracy Ny Mw: 0.4691 - Accuracy Oci Fr: 0.1533 - Accuracy Om Et: 0.9512 - Accuracy Or In: 0.5447 - Accuracy Pa In: 0.8153 - Accuracy Pl Pl: 0.7757 - Accuracy Ps Af: 0.8105 - Accuracy Pt Br: 0.7715 - Accuracy Ro Ro: 0.4122 - Accuracy Ru Ru: 0.9794 - Accuracy Rup Bg: 0.9468 - Accuracy Sd Arab In: 0.5245 - Accuracy Sk Sk: 0.8624 - Accuracy Sl Si: 0.0300 - Accuracy Sn Zw: 0.8843 - Accuracy So So: 0.8803 - Accuracy Sr Rs: 0.0257 - Accuracy Sv Se: 0.0145 - Accuracy Sw Ke: 0.9199 - Accuracy Ta In: 0.9526 - Accuracy Te In: 0.9788 - Accuracy Tg Tj: 0.9883 - Accuracy Th Th: 0.9912 - Accuracy Tr Tr: 0.7887 - Accuracy Uk Ua: 0.0627 - Accuracy Umb Ao: 0.7863 - Accuracy Ur Pk: 0.0134 - Accuracy Uz Uz: 0.4014 - Accuracy Vi Vn: 0.7246 - Accuracy Wo Sn: 0.4555 - Accuracy Xh Za: 1.0 - Accuracy Yo Ng: 0.7353 - Accuracy Yue Hant Hk: 0.7985 - Accuracy Zu Za: 0.4696 - Loss: 1.3789 - Loss Af Za: 2.6778 - Loss Am Et: 0.4615 - Loss Ar Eg: 0.0149 - Loss As In: 0.0764 - Loss Ast Es: 0.4560 - Loss Az Az: 0.5677 - Loss Be By: 1.9231 - Loss Bn In: 0.0024 - Loss Bs Ba: 2.4954 - Loss Ca Es: 1.2632 - Loss Ceb Ph: 0.0426 - Loss Cmn Hans Cn: 0.0650 - Loss Cs Cz: 
1.9334 - Loss Cy Gb: 0.1274 - Loss Da Dk: 1.4990 - Loss De De: 0.8820 - Loss El Gr: 0.9839 - Loss En Us: 0.0827 - Loss Es 419: 0.0516 - Loss Et Ee: 1.9264 - Loss Fa Ir: 0.6520 - Loss Ff Sn: 5.4283 - Loss Fi Fi: 0.0109 - Loss Fil Ph: 0.1706 - Loss Fr Fr: 0.0591 - Loss Ga Ie: 0.5174 - Loss Gl Es: 1.2657 - Loss Gu In: 0.0850 - Loss Ha Ng: 0.3234 - Loss He Il: 0.8299 - Loss Hi In: 0.4190 - Loss Hr Hr: 2.9754 - Loss Hu Hu: 0.8345 - Loss Hy Am: 0.0329 - Loss Id Id: 0.0529 - Loss Ig Ng: 0.2523 - Loss Is Is: 6.5153 - Loss It It: 0.8113 - Loss Ja Jp: 1.3968 - Loss Jv Id: 2.0009 - Loss Ka Ge: 0.6162 - Loss Kam Ke: 2.2192 - Loss Kea Cv: 0.5567 - Loss Kk Kz: 0.5592 - Loss Km Kh: 1.7358 - Loss Kn In: 0.1063 - Loss Ko Kr: 0.1519 - Loss Ku Arab Iq: 0.2075 - Loss Ky Kg: 0.4639 - Loss Lb Lu: 0.4454 - Loss Lg Ug: 0.3764 - Loss Ln Cd: 0.1844 - Loss Lo La: 3.8051 - Loss Lt Lt: 2.5054 - Loss Luo Ke: 0.0479 - Loss Lv Lv: 1.3713 - Loss Mi Nz: 0.1390 - Loss Mk Mk: 0.7952 - Loss Ml In: 1.2999 - Loss Mn Mn: 0.7621 - Loss Mr In: 3.7056 - Loss Ms My: 3.0192 - Loss Mt Mt: 1.5520 - Loss My Mm: 0.1514 - Loss Nb No: 1.1194 - Loss Ne Np: 0.4231 - Loss Nl Nl: 0.3291 - Loss Nso Za: 0.5106 - Loss Ny Mw: 2.7346 - Loss Oci Fr: 5.0983 - Loss Om Et: 0.2297 - Loss Or In: 2.5432 - Loss Pa In: 0.7753 - Loss Pl Pl: 0.7309 - Loss Ps Af: 1.0454 - Loss Pt Br: 0.9782 - Loss Ro Ro: 3.5829 - Loss Ru Ru: 0.0598 - Loss Rup Bg: 0.1695 - Loss Sd Arab In: 2.6198 - Loss Sk Sk: 0.5583 - Loss Sl Si: 6.0923 - Loss Sn Zw: 0.4465 - Loss So So: 0.4492 - Loss Sr Rs: 4.7575 - Loss Sv Se: 6.5858 - Loss Sw Ke: 0.4235 - Loss Ta In: 0.1818 - Loss Te In: 0.0808 - Loss Tg Tj: 0.0912 - Loss Th Th: 0.0462 - Loss Tr Tr: 0.7340 - Loss Uk Ua: 4.6777 - Loss Umb Ao: 1.4021 - Loss Ur Pk: 8.4067 - Loss Uz Uz: 4.3297 - Loss Vi Vn: 1.1304 - Loss Wo Sn: 2.2281 - Loss Xh Za: 0.0009 - Loss Yo Ng: 1.3345 - Loss Yue Hant Hk: 1.0728 - Loss Zu Za: 3.7279 - Predict Samples: 77960 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:-----:|:--------:|:---------------:| | 0.5296 | 0.26 | 1000 | 0.4016 | 2.6633 | | 0.4252 | 0.52 | 2000 | 0.5751 | 1.8582 | | 0.2989 | 0.78 | 3000 | 0.6332 | 1.6780 | | 0.3563 | 1.04 | 4000 | 0.6799 | 1.4479 | | 0.1617 | 1.3 | 5000 | 0.6679 | 1.5066 | | 0.1409 | 1.56 | 6000 | 0.6992 | 1.4082 | | 0.01 | 1.82 | 7000 | 0.7071 | 1.2448 | | 0.0018 | 2.08 | 8000 | 0.7148 | 1.1996 | | 0.0014 | 2.34 | 9000 | 0.6410 | 1.6505 | | 0.0188 | 2.6 | 10000 | 0.6840 | 1.4050 | | 0.0007 | 2.86 | 11000 | 0.6621 | 1.5831 | | 0.1038 | 3.12 | 12000 | 0.6829 | 1.5441 | | 0.0003 | 3.38 | 13000 | 0.6900 | 1.3483 | | 0.0004 | 3.64 | 14000 | 0.6414 | 1.7070 | | 0.0003 | 3.9 | 15000 | 0.7075 | 1.3198 | | 0.0002 | 4.16 | 16000 | 0.7105 | 1.3118 | | 0.0001 | 4.42 | 17000 | 0.7029 | 1.4099 | | 0.0 | 4.68 | 18000 | 0.7180 | 1.3658 | | 0.0001 | 4.93 | 19000 | 0.7236 | 1.3514 | ### Framework versions - Transformers 
4.18.0.dev0 - Pytorch 1.10.1+cu111 - Datasets 1.18.4.dev0 - Tokenizers 0.11.6
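The xtreme_s_xlsr_300m_fleurs_langid card above lists per-language accuracies but no usage snippet. A minimal language-identification sketch follows, assuming the checkpoint loads as a standard audio-classification model with a bundled preprocessor config; the hub id and the sample clip are placeholders.

```python
import torch
from datasets import load_dataset
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

# Placeholder hub id -- replace with the actual repository of the language-ID checkpoint.
checkpoint = "your-org/xtreme_s_xlsr_300m_fleurs_langid"

feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = AutoModelForAudioClassification.from_pretrained(checkpoint)

# Any 16 kHz speech clip works; a small public dummy ASR split is used here as an example.
sample = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")[0]["audio"]

inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The highest-scoring class id maps to a FLEURS language label via the model config.
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
```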
Alireza-rw/testbot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: distilbert-base-cased-finetuned-fake-news-detection results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-cased-finetuned-fake-news-detection This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0043 - F1: 0.9996 - Accuracy: 0.9996 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:| | No log | 1.0 | 1684 | 0.0043 | 0.9993 | 0.9993 | | No log | 2.0 | 3368 | 0.0043 | 0.9996 | 0.9996 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
Aliskin/xlm-roberta-base-finetuned-marc
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-04-06T18:40:14Z
--- license: apache-2.0 tags: - automatic-speech-recognition - abdusahmbzuai/arabic_speech_massive_sm - generated_from_trainer model-index: - name: aradia-ctc-distilhubert-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # aradia-ctc-distilhubert-ft This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_SM - NA dataset. It achieves the following results on the evaluation set: - Loss: 2.7114 - Wer: 0.8908 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.43 | 100 | 4.4129 | 1.0 | | No log | 0.87 | 200 | 3.5927 | 1.0 | | No log | 1.3 | 300 | 3.3780 | 1.0 | | No log | 1.74 | 400 | 3.0830 | 1.0 | | 5.3551 | 2.17 | 500 | 2.6278 | 0.9999 | | 5.3551 | 2.61 | 600 | 1.8359 | 1.0000 | | 5.3551 | 3.04 | 700 | 1.7878 | 0.9914 | | 5.3551 | 3.48 | 800 | 1.5219 | 0.9875 | | 5.3551 | 3.91 | 900 | 1.4348 | 0.9879 | | 1.7199 | 4.35 | 1000 | 1.4354 | 0.9644 | | 1.7199 | 4.78 | 1100 | 1.5210 | 0.9519 | | 1.7199 | 5.22 | 1200 | 1.3607 | 0.9475 | | 1.7199 | 5.65 | 1300 | 1.3839 | 0.9343 | | 1.7199 | 6.09 | 1400 | 1.2806 | 0.8944 | | 1.2342 | 6.52 | 1500 | 1.3036 | 0.9011 | | 1.2342 | 6.95 | 1600 | 1.3704 | 0.9072 | | 1.2342 | 7.39 | 1700 | 1.2981 | 0.8891 | | 1.2342 | 7.82 | 1800 | 1.2786 | 0.8733 | | 1.2342 | 8.26 | 1900 | 1.2897 | 0.8867 | | 0.9831 | 8.69 | 2000 | 1.4436 | 0.8780 | | 0.9831 | 9.13 | 2100 | 1.3680 | 0.8873 | | 0.9831 | 9.56 | 2200 | 1.3471 | 0.8692 | | 0.9831 | 10.0 | 2300 | 1.3725 | 0.8729 | | 0.9831 | 10.43 | 2400 | 1.4439 | 0.8771 | | 0.8071 | 10.87 | 2500 | 1.5114 | 0.8928 | | 0.8071 | 11.3 | 2600 | 1.6156 | 0.8958 | | 0.8071 | 11.74 | 2700 | 1.4381 | 0.8749 | | 0.8071 | 12.17 | 2800 | 1.5088 | 0.8717 | | 0.8071 | 12.61 | 2900 | 1.5486 | 0.8813 | | 0.6321 | 13.04 | 3000 | 1.4536 | 0.8884 | | 0.6321 | 13.48 | 3100 | 1.4679 | 0.8947 | | 0.6321 | 13.91 | 3200 | 1.5628 | 0.9117 | | 0.6321 | 14.35 | 3300 | 1.5831 | 0.8716 | | 0.6321 | 14.78 | 3400 | 1.6733 | 0.8702 | | 0.4998 | 15.22 | 3500 | 1.8225 | 0.8665 | | 0.4998 | 15.65 | 3600 | 1.8558 | 0.8732 | | 0.4998 | 16.09 | 3700 | 1.7513 | 0.8766 | | 0.4998 | 16.52 | 3800 | 1.8562 | 0.8753 | | 0.4998 | 16.95 | 3900 | 1.9018 | 0.8704 | | 0.4421 | 17.39 | 4000 | 1.9341 | 0.8789 | | 0.4421 | 17.82 | 4100 | 1.9582 | 0.8781 | | 0.4421 | 18.26 | 4200 | 1.8863 | 0.8821 | | 0.4421 | 18.69 | 4300 | 1.9366 | 0.8847 | | 0.4421 | 19.13 | 4400 | 2.1902 | 0.8721 | | 0.3712 | 19.56 | 4500 | 2.1641 | 0.8670 | | 0.3712 | 20.0 | 4600 | 2.1639 | 0.8776 | | 0.3712 | 20.43 | 4700 | 2.2695 | 0.9030 | | 0.3712 | 20.87 | 4800 | 2.1909 | 0.8937 | | 0.3712 | 21.3 | 4900 | 2.1606 | 0.8959 | | 0.3067 | 21.74 | 5000 | 2.1756 | 0.8943 | | 0.3067 | 22.17 | 
5100 | 2.4092 | 0.8773 | | 0.3067 | 22.61 | 5200 | 2.4991 | 0.8721 | | 0.3067 | 23.04 | 5300 | 2.3340 | 0.8910 | | 0.3067 | 23.48 | 5400 | 2.3567 | 0.8946 | | 0.2764 | 23.91 | 5500 | 2.3215 | 0.8897 | | 0.2764 | 24.35 | 5600 | 2.4824 | 0.9002 | | 0.2764 | 24.78 | 5700 | 2.4585 | 0.8963 | | 0.2764 | 25.22 | 5800 | 2.5804 | 0.8879 | | 0.2764 | 25.65 | 5900 | 2.5814 | 0.8903 | | 0.2593 | 26.09 | 6000 | 2.5374 | 0.8868 | | 0.2593 | 26.52 | 6100 | 2.5346 | 0.8922 | | 0.2593 | 26.95 | 6200 | 2.5465 | 0.8873 | | 0.2593 | 27.39 | 6300 | 2.6002 | 0.8919 | | 0.2593 | 27.82 | 6400 | 2.6102 | 0.8928 | | 0.227 | 28.26 | 6500 | 2.6925 | 0.8914 | | 0.227 | 28.69 | 6600 | 2.6981 | 0.8913 | | 0.227 | 29.13 | 6700 | 2.6872 | 0.8891 | | 0.227 | 29.56 | 6800 | 2.7015 | 0.8897 | | 0.227 | 30.0 | 6900 | 2.7114 | 0.8908 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
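The aradia-ctc-distilhubert-ft card above reports WER on an Arabic speech dataset but gives no inference example. A minimal sketch with the automatic-speech-recognition pipeline follows; the hub id and the audio filename are placeholders, and a 16 kHz mono recording is assumed.

```python
from transformers import pipeline

# Placeholder hub id -- the card names the run "aradia-ctc-distilhubert-ft" but not its repository.
asr = pipeline("automatic-speech-recognition", model="your-org/aradia-ctc-distilhubert-ft")

# Placeholder filename; the pipeline resamples/decodes common audio formats to 16 kHz mono.
print(asr("arabic_sample.wav")["text"])
```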
Amitabh/doc-classification
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- extra_gated_prompt: "You agree to not use the model to conduct experiments that cause harm to human subjects." extra_gated_fields: Company: text Country: text I agree to use this model for non-commercial use ONLY: checkbox --- # Temp Model Hello there, what is up!
Amro-Kamal/gpt
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - CartPole-v1 --- # **DQN** Agent playing **CartPole-v1** This is a trained model of a **DQN** agent playing **CartPole-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Evaluation Results mean_reward=500.00 +/- 0.0 ## Usage (with Stable-baselines3) TODO: Add your code
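The DQN/CartPole-v1 card above leaves its usage section as a TODO. The sketch below shows one way to train and evaluate a comparable agent with stable-baselines3; it trains from scratch rather than loading the author's uploaded checkpoint, whose filename is not given in the card.

```python
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy

# Train a small DQN agent from scratch on CartPole-v1; loading the card's uploaded
# checkpoint would instead use DQN.load("<file>.zip"), but its filename is not given.
model = DQN("MlpPolicy", "CartPole-v1", verbose=0)
model.learn(total_timesteps=50_000)

# Same metric the card reports: mean reward over evaluation episodes.
mean_reward, std_reward = evaluate_policy(model, model.get_env(), n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```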
Andranik/TestPytorchClassification
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
null
--- language: - "List of ISO 639-1 code for your language" - zh widget: - text: "中央疫情指揮中心臨時記者會宣布全院區為紅區,擴大隔離,但鄭文燦早在七十二小時前就主張,只要是先前在桃園醫院住院、轉院的患者與陪病家屬,都要居家隔離" example_title: "範例ㄧ" - text: "台東地檢署21日指揮警方前往張靜的事務所及黃姓女友所經營的按摩店進行搜索" example_title: "範例二" - text: "各地停電事件頻傳,即便經濟部與台電均否認「台灣缺電」,但也難消國人的疑慮。" example_title: "範例三" --- --- license: gpl-3.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: albert-base-chinese-0407-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-chinese-0407-ner This model is a fine-tuned version of [ckiplab/albert-base-chinese](https://huggingface.co/ckiplab/albert-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0948 - Precision: 0.8603 - Recall: 0.8871 - F1: 0.8735 - Accuracy: 0.9704 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 1.3484 | 0.05 | 500 | 0.5395 | 0.1841 | 0.1976 | 0.1906 | 0.8465 | | 0.3948 | 0.09 | 1000 | 0.2910 | 0.6138 | 0.7113 | 0.6590 | 0.9263 | | 0.2388 | 0.14 | 1500 | 0.2030 | 0.6628 | 0.7797 | 0.7165 | 0.9414 | | 0.1864 | 0.18 | 2000 | 0.1729 | 0.7490 | 0.7935 | 0.7706 | 0.9498 | | 0.1754 | 0.23 | 2500 | 0.1641 | 0.7415 | 0.7869 | 0.7635 | 0.9505 | | 0.1558 | 0.28 | 3000 | 0.1532 | 0.7680 | 0.8002 | 0.7838 | 0.9530 | | 0.1497 | 0.32 | 3500 | 0.1424 | 0.7865 | 0.8282 | 0.8068 | 0.9555 | | 0.1488 | 0.37 | 4000 | 0.1373 | 0.7887 | 0.8111 | 0.7997 | 0.9553 | | 0.1361 | 0.42 | 4500 | 0.1311 | 0.7942 | 0.8382 | 0.8156 | 0.9590 | | 0.1335 | 0.46 | 5000 | 0.1264 | 0.7948 | 0.8423 | 0.8179 | 0.9596 | | 0.1296 | 0.51 | 5500 | 0.1242 | 0.8129 | 0.8416 | 0.8270 | 0.9603 | | 0.1338 | 0.55 | 6000 | 0.1315 | 0.7910 | 0.8588 | 0.8235 | 0.9586 | | 0.1267 | 0.6 | 6500 | 0.1193 | 0.8092 | 0.8399 | 0.8243 | 0.9609 | | 0.1207 | 0.65 | 7000 | 0.1205 | 0.8021 | 0.8469 | 0.8239 | 0.9601 | | 0.1214 | 0.69 | 7500 | 0.1201 | 0.7969 | 0.8489 | 0.8220 | 0.9605 | | 0.1168 | 0.74 | 8000 | 0.1134 | 0.8087 | 0.8607 | 0.8339 | 0.9620 | | 0.1162 | 0.78 | 8500 | 0.1127 | 0.8177 | 0.8492 | 0.8331 | 0.9625 | | 0.1202 | 0.83 | 9000 | 0.1283 | 0.7986 | 0.8550 | 0.8259 | 0.9580 | | 0.1135 | 0.88 | 9500 | 0.1101 | 0.8213 | 0.8572 | 0.8389 | 0.9638 | | 0.1121 | 0.92 | 10000 | 0.1097 | 0.8190 | 0.8588 | 0.8384 | 0.9635 | | 0.1091 | 0.97 | 10500 | 0.1088 | 0.8180 | 0.8521 | 0.8347 | 0.9632 | | 0.1058 | 1.02 | 11000 | 0.1085 | 0.8136 | 0.8716 | 0.8416 | 0.9630 | | 0.0919 | 1.06 | 11500 | 0.1079 | 0.8309 | 0.8566 | 0.8436 | 0.9646 | | 0.0914 | 1.11 | 12000 | 0.1079 | 0.8423 | 0.8542 | 0.8482 | 0.9656 | | 0.0921 | 1.15 | 12500 | 0.1109 | 0.8312 | 0.8647 | 0.8476 | 0.9646 | | 0.0926 | 1.2 | 13000 | 0.1240 | 0.8413 | 0.8488 | 0.8451 | 0.9637 | | 0.0914 | 1.25 | 13500 | 
0.1040 | 0.8336 | 0.8666 | 0.8498 | 0.9652 | | 0.0917 | 1.29 | 14000 | 0.1032 | 0.8352 | 0.8707 | 0.8526 | 0.9662 | | 0.0928 | 1.34 | 14500 | 0.1052 | 0.8347 | 0.8656 | 0.8498 | 0.9651 | | 0.0906 | 1.38 | 15000 | 0.1032 | 0.8399 | 0.8619 | 0.8507 | 0.9662 | | 0.0903 | 1.43 | 15500 | 0.1074 | 0.8180 | 0.8708 | 0.8436 | 0.9651 | | 0.0889 | 1.48 | 16000 | 0.0990 | 0.8367 | 0.8713 | 0.8537 | 0.9670 | | 0.0914 | 1.52 | 16500 | 0.1055 | 0.8508 | 0.8506 | 0.8507 | 0.9661 | | 0.0934 | 1.57 | 17000 | 0.0979 | 0.8326 | 0.8740 | 0.8528 | 0.9669 | | 0.0898 | 1.62 | 17500 | 0.1022 | 0.8393 | 0.8615 | 0.8502 | 0.9668 | | 0.0869 | 1.66 | 18000 | 0.0962 | 0.8484 | 0.8762 | 0.8621 | 0.9682 | | 0.089 | 1.71 | 18500 | 0.1008 | 0.8447 | 0.8714 | 0.8579 | 0.9674 | | 0.0927 | 1.75 | 19000 | 0.0986 | 0.8379 | 0.8749 | 0.8560 | 0.9673 | | 0.0883 | 1.8 | 19500 | 0.0965 | 0.8518 | 0.8749 | 0.8632 | 0.9688 | | 0.0965 | 1.85 | 20000 | 0.0937 | 0.8412 | 0.8766 | 0.8585 | 0.9682 | | 0.0834 | 1.89 | 20500 | 0.0920 | 0.8451 | 0.8862 | 0.8652 | 0.9687 | | 0.0817 | 1.94 | 21000 | 0.0943 | 0.8439 | 0.8800 | 0.8616 | 0.9686 | | 0.088 | 1.99 | 21500 | 0.0927 | 0.8483 | 0.8762 | 0.8620 | 0.9683 | | 0.0705 | 2.03 | 22000 | 0.0993 | 0.8525 | 0.8783 | 0.8652 | 0.9690 | | 0.0709 | 2.08 | 22500 | 0.0976 | 0.8610 | 0.8697 | 0.8653 | 0.9689 | | 0.0655 | 2.12 | 23000 | 0.0997 | 0.8585 | 0.8665 | 0.8625 | 0.9683 | | 0.0656 | 2.17 | 23500 | 0.0966 | 0.8569 | 0.8822 | 0.8694 | 0.9695 | | 0.0698 | 2.22 | 24000 | 0.0955 | 0.8604 | 0.8775 | 0.8689 | 0.9696 | | 0.065 | 2.26 | 24500 | 0.0971 | 0.8614 | 0.8780 | 0.8696 | 0.9697 | | 0.0653 | 2.31 | 25000 | 0.0959 | 0.8600 | 0.8787 | 0.8692 | 0.9698 | | 0.0685 | 2.35 | 25500 | 0.1001 | 0.8610 | 0.8710 | 0.8659 | 0.9690 | | 0.0684 | 2.4 | 26000 | 0.0969 | 0.8490 | 0.8877 | 0.8679 | 0.9690 | | 0.0657 | 2.45 | 26500 | 0.0954 | 0.8532 | 0.8832 | 0.8680 | 0.9696 | | 0.0668 | 2.49 | 27000 | 0.0947 | 0.8604 | 0.8793 | 0.8698 | 0.9695 | | 0.0644 | 2.54 | 27500 | 0.0989 | 0.8527 | 0.8790 | 0.8656 | 0.9696 | | 0.0685 | 2.59 | 28000 | 0.0955 | 0.8596 | 0.8772 | 0.8683 | 0.9700 | | 0.0702 | 2.63 | 28500 | 0.0937 | 0.8585 | 0.8837 | 0.8709 | 0.9700 | | 0.0644 | 2.68 | 29000 | 0.0946 | 0.8605 | 0.8830 | 0.8716 | 0.9702 | | 0.065 | 2.72 | 29500 | 0.0953 | 0.8617 | 0.8822 | 0.8719 | 0.9701 | | 0.063 | 2.77 | 30000 | 0.0943 | 0.8597 | 0.8848 | 0.8721 | 0.9701 | | 0.0638 | 2.82 | 30500 | 0.0941 | 0.8619 | 0.8846 | 0.8731 | 0.9702 | | 0.066 | 2.86 | 31000 | 0.0942 | 0.8608 | 0.8847 | 0.8726 | 0.9701 | | 0.0589 | 2.91 | 31500 | 0.0952 | 0.8632 | 0.8836 | 0.8733 | 0.9704 | | 0.0568 | 2.95 | 32000 | 0.0948 | 0.8603 | 0.8871 | 0.8735 | 0.9704 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
Andranik/TestQA2
[ "pytorch", "electra", "question-answering", "transformers", "generated_from_trainer", "autotrain_compatible" ]
question-answering
{ "architectures": [ "ElectraForQuestionAnswering" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-MIR_ST500-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-MIR_ST500-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7360 - Wer: 0.9837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 101.0917 | 16.67 | 100 | 18.8979 | 0.8208 | | 15.5054 | 33.33 | 200 | 10.9184 | 0.8208 | | 10.1879 | 50.0 | 300 | 7.6480 | 0.8208 | | 6.777 | 66.67 | 400 | 3.5386 | 1.0 | | 3.0546 | 83.33 | 500 | 2.8794 | 1.0 | | 2.8661 | 100.0 | 600 | 2.8405 | 1.0 | | 2.847 | 116.67 | 700 | 2.8554 | 1.0 | | 2.7661 | 133.33 | 800 | 2.6343 | 1.0 | | 2.3474 | 150.0 | 900 | 2.7464 | 1.0 | | 2.2464 | 166.67 | 1000 | 2.3565 | 1.0 | | 2.207 | 183.33 | 1100 | 2.8854 | 1.0 | | 2.3138 | 200.0 | 1200 | 2.5868 | 1.0 | | 2.259 | 216.67 | 1300 | 2.6530 | 1.0 | | 2.1667 | 233.33 | 1400 | 2.4921 | 1.0 | | 2.1268 | 250.0 | 1500 | 2.5435 | 1.0 | | 2.1089 | 266.67 | 1600 | 2.5444 | 1.0 | | 2.0845 | 283.33 | 1700 | 2.6796 | 1.0 | | 2.0672 | 300.0 | 1800 | 2.5824 | 1.0 | | 2.055 | 316.67 | 1900 | 2.4631 | 1.0 | | 2.0317 | 333.33 | 2000 | 2.5751 | 1.0 | | 2.0141 | 350.0 | 2100 | 2.5627 | 1.0 | | 1.9914 | 366.67 | 2200 | 2.6132 | 1.0 | | 1.9489 | 383.33 | 2300 | 2.7527 | 1.0 | | 1.9146 | 400.0 | 2400 | 2.6121 | 0.9935 | | 1.893 | 416.67 | 2500 | 2.7110 | 0.9902 | | 1.845 | 433.33 | 2600 | 2.7410 | 0.9967 | | 1.8095 | 450.0 | 2700 | 2.7013 | 0.9935 | | 1.7708 | 466.67 | 2800 | 2.7719 | 0.9935 | | 1.7224 | 483.33 | 2900 | 2.7740 | 0.9837 | | 1.6961 | 500.0 | 3000 | 2.7360 | 0.9837 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
AnonymousSub/SR_rule_based_hier_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: anomaly2 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 1.0 --- # anomaly2 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### abnormal ![abnormal](images/abnormal) #### normal ![normal](images/normal.tif)
AnonymousSub/SR_rule_based_roberta_bert_quadruplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0002 - Accuracy: 0.8923 - F1: 0.9167 - Precision: 0.8462 - Recall: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.0026 | 1.0 | 1956 | 0.0003 | 0.9552 | 0.9636 | 0.9298 | 1.0 | | 0.0015 | 2.0 | 3912 | 0.0003 | 0.6688 | 0.7815 | 0.6416 | 0.9996 | | 0.0011 | 3.0 | 5868 | 0.0002 | 0.8923 | 0.9167 | 0.8462 | 1.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
AnonymousSub/SR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9299878143347735 - name: Recall type: recall value: 0.9391430808815304 - name: F1 type: f1 value: 0.93454302571524 - name: Accuracy type: accuracy value: 0.9841453921553053 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0635 - Precision: 0.9300 - Recall: 0.9391 - F1: 0.9345 - Accuracy: 0.9841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0886 | 1.0 | 1756 | 0.0676 | 0.9198 | 0.9233 | 0.9215 | 0.9809 | | 0.0382 | 2.0 | 3512 | 0.0605 | 0.9271 | 0.9360 | 0.9315 | 0.9836 | | 0.0247 | 3.0 | 5268 | 0.0635 | 0.9300 | 0.9391 | 0.9345 | 0.9841 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.9.0 - Datasets 2.0.0 - Tokenizers 0.11.6
AnonymousSub/SR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - en tags: - text-classification - fake-news - pytorch datasets: - Fake News https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset metrics: - Accuracy, AUC --- ## Model description: [DistilBERT](https://arxiv.org/abs/1910.01108) is trained with knowledge distillation during the pre-training phase, which reduces the size of a BERT model by 40% while retaining 97% of its language understanding; it is smaller and faster than BERT. [Distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) was fine-tuned on the fake news dataset with the hyperparameters below: ``` learning rate 5e-5, batch size 32, num_train_epochs=2, ``` Full code available @ [DistilBert-FakeNews](https://github.com/anasserhussien/DistilBert-FakeNews) Dataset available @ [Fake News dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
AnonymousSub/SR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4718 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.707 | 1.0 | 157 | 2.4883 | | 2.572 | 2.0 | 314 | 2.4240 | | 2.5377 | 3.0 | 471 | 2.4355 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
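The distilbert-base-uncased-finetuned-imdb card above describes a masked-language-modeling fine-tune (evaluation loss only) without a usage example. A minimal fill-mask sketch follows; the hub id is a placeholder.

```python
from transformers import pipeline

# Placeholder hub id -- substitute the actual repository of the IMDb-adapted checkpoint.
fill_mask = pipeline("fill-mask", model="your-org/distilbert-base-uncased-finetuned-imdb")

# DistilBERT uses the [MASK] token; the top predictions and scores are printed.
for prediction in fill_mask("This is a great [MASK]."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```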
AnonymousSub/SR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: mit tags: - generated_from_trainer datasets: - dutch_social metrics: - accuracy - f1 - precision - recall model-index: - name: robbert-twitter-sentiment-custom results: - task: name: Text Classification type: text-classification dataset: name: dutch_social type: dutch_social args: dutch_social metrics: - name: Accuracy type: accuracy value: 0.788 - name: F1 type: f1 value: 0.7878005279207152 - name: Precision type: precision value: 0.7877102066609215 - name: Recall type: recall value: 0.788 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robbert-twitter-sentiment-custom This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on the dutch_social dataset. It achieves the following results on the evaluation set: - Loss: 0.6656 - Accuracy: 0.788 - F1: 0.7878 - Precision: 0.7877 - Recall: 0.788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.8287 | 1.0 | 282 | 0.7178 | 0.7007 | 0.6958 | 0.6973 | 0.7007 | | 0.4339 | 2.0 | 564 | 0.5873 | 0.7667 | 0.7668 | 0.7681 | 0.7667 | | 0.2045 | 3.0 | 846 | 0.6656 | 0.788 | 0.7878 | 0.7877 | 0.788 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cpu - Datasets 2.0.0 - Tokenizers 0.11.6
AnonymousSub/SR_rule_based_twostage_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: cc-by-3.0 --- Architecture: ResNet-18 with two modifications. 1. A 1-channel Conv2d as the first layer. 2. A 2-way output on the FC layer. Training procedure: 1. Pre-trained on ImageNet. 2. Further training on Fashion-MNIST. 3. Final training on the task of predicting whether Fashion-MNIST images are flipped vertically or not.
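The ResNet-18 card above describes its two architectural changes only in prose. The sketch below shows how those modifications are commonly applied in torchvision; the layer shapes are assumptions based on the card's description, not code taken from the author.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained ResNet-18, then apply the two modifications from the card.
model = models.resnet18(pretrained=True)

# 1. Replace the 3-channel stem with a 1-channel Conv2d (grayscale Fashion-MNIST input).
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

# 2. Replace the 1000-way classifier with a 2-way output (flipped vs. not flipped).
model.fc = nn.Linear(model.fc.in_features, 2)

# Sanity check with a batch of grayscale images resized to the network's expected input size.
dummy = torch.randn(4, 1, 224, 224)
print(model(dummy).shape)  # torch.Size([4, 2])
```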
AnonymousSub/SR_rule_based_twostagequadruplet_hier_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.919 - name: F1 type: f1 value: 0.9190903538852266 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2225 - Accuracy: 0.919 - F1: 0.9191 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.814 | 1.0 | 250 | 0.3153 | 0.904 | 0.9016 | | 0.2515 | 2.0 | 500 | 0.2225 | 0.919 | 0.9191 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu116 - Datasets 2.6.1 - Tokenizers 0.13.1
AnonymousSub/bert-base-uncased_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- language: en thumbnail: http://www.huggingtweets.com/timjdillon/1649358240896/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1010263656456744960/bXOUw0hb_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Tim Dillon</div> <div style="text-align: center; font-size: 14px;">@timjdillon</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Tim Dillon. | Data | Tim Dillon | | --- | --- | | Tweets downloaded | 3240 | | Retweets | 658 | | Short tweets | 293 | | Tweets kept | 2289 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1egbnexm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @timjdillon's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1yr18emq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1yr18emq/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/timjdillon') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AnonymousSub/bert_hier_diff_equal_wts_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-german-cased-finetuned-subj_v5_7Epoch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-german-cased-finetuned-subj_v5_7Epoch This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3036 - Precision: 0.7983 - Recall: 0.7781 - F1: 0.7881 - Accuracy: 0.9073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 32 | 0.3438 | 0.6970 | 0.7107 | 0.7038 | 0.8626 | | No log | 2.0 | 64 | 0.2747 | 0.7688 | 0.7472 | 0.7578 | 0.8902 | | No log | 3.0 | 96 | 0.2683 | 0.7827 | 0.7893 | 0.7860 | 0.8981 | | No log | 4.0 | 128 | 0.2768 | 0.8024 | 0.7528 | 0.7768 | 0.9027 | | No log | 5.0 | 160 | 0.2881 | 0.8102 | 0.7556 | 0.7820 | 0.9060 | | No log | 6.0 | 192 | 0.3006 | 0.7959 | 0.7669 | 0.7811 | 0.9040 | | No log | 7.0 | 224 | 0.3036 | 0.7983 | 0.7781 | 0.7881 | 0.9073 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
AnonymousSub/bert_mean_diff_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: mit tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: ACTS_feedback1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ACTS_feedback1 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2357 - Accuracy: 0.8936 - Balanced accuracy: 0.8897 - Precision: 0.8951 - Recall: 0.8936 - F1: 0.8915 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Balanced accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:---------:|:------:|:------:| | 1.0881 | 1.0 | 12 | 1.0513 | 0.5532 | 0.5119 | 0.4004 | 0.5532 | 0.4645 | | 0.9933 | 2.0 | 24 | 0.9257 | 0.5319 | 0.4952 | 0.3852 | 0.5319 | 0.4463 | | 0.8065 | 3.0 | 36 | 0.7059 | 0.7234 | 0.7295 | 0.7607 | 0.7234 | 0.7184 | | 0.5504 | 4.0 | 48 | 0.4259 | 0.8511 | 0.8474 | 0.8486 | 0.8511 | 0.8472 | | 0.3262 | 5.0 | 60 | 0.3703 | 0.8511 | 0.8654 | 0.8624 | 0.8511 | 0.8499 | | 0.1877 | 6.0 | 72 | 0.2518 | 0.8723 | 0.8731 | 0.8719 | 0.8723 | 0.8703 | | 0.1094 | 7.0 | 84 | 0.2283 | 0.9362 | 0.9410 | 0.9415 | 0.9362 | 0.9365 | | 0.0721 | 8.0 | 96 | 0.2246 | 0.9149 | 0.9244 | 0.9233 | 0.9149 | 0.9149 | | 0.0521 | 9.0 | 108 | 0.2215 | 0.8936 | 0.8897 | 0.8951 | 0.8936 | 0.8915 | | 0.0455 | 10.0 | 120 | 0.2357 | 0.8936 | 0.8897 | 0.8951 | 0.8936 | 0.8915 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
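Since the card leaves usage unspecified, a minimal, unofficial inference sketch follows. The repository path below is a placeholder (the card does not say where this fine-tuned checkpoint is published), and the label names depend on the training configuration.

```python
from transformers import pipeline

# Placeholder repository id -- the card does not state where this checkpoint lives.
classifier = pipeline("text-classification", model="your-username/ACTS_feedback1")

print(classifier("The feedback was specific and easy to act on."))
# e.g. [{'label': 'LABEL_2', 'score': 0.97}] -- label names depend on the training config
```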
AnonymousSub/bert_mean_diff_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
# Description

This model is part of the NLP assignment for the Fatima Fellowship. It is a fine-tuned version of 'bert-base-uncased' on the dataset below:

[Fake News Dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)

It achieves the following results on the evaluation set:
- Accuracy: 0.995
- Precision: 0.995
- Recall: 0.995
- F_score: 0.995

# Labels

Fake news: 0
Real news: 1

# Using this model in your code

To use this model, load it from the Hugging Face Hub:

```python
import torch.nn as nn
import transformers
from transformers import AutoConfig, AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # base model whose configuration is reused

class Fake_Real_Model_Arch_test(transformers.PreTrainedModel):
    def __init__(self, bert):
        super(Fake_Real_Model_Arch_test, self).__init__(config=AutoConfig.from_pretrained(MODEL_NAME))
        self.bert = bert
        num_classes = 2       # number of targets to predict (fake vs. real)
        embedding_dim = 768   # hidden size of the BERT encoder
        self.fc1 = nn.Linear(embedding_dim, num_classes)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, text_id, text_mask):
        outputs = self.bert(text_id, attention_mask=text_mask)
        outputs = outputs[1]  # pooled [CLS] representation
        logit = self.fc1(outputs)
        return self.softmax(logit)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = Fake_Real_Model_Arch_test(AutoModel.from_pretrained("rematchka/Bert_fake_news_detection"))
```
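A possible inference sketch building on the snippet above; the example headline is illustrative, not taken from the dataset.

```python
import torch

# `tokenizer` and `model` come from the loading snippet above.
text = "Breaking: scientists discover water on the moon"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    probs = model(inputs["input_ids"], inputs["attention_mask"])

label = probs.argmax(dim=-1).item()  # 0 = fake news, 1 = real news
print("real" if label == 1 else "fake", probs.squeeze().tolist())
```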
AnonymousSub/bert_snips
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1503591435324563456/foUrqiEw_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1010263656456744960/bXOUw0hb_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1468306462245994496/x8koB4rb_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Tim Dillon & mark normand</div> <div style="text-align: center; font-size: 14px;">@elonmusk-marknorm-timjdillon</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Elon Musk & Tim Dillon & mark normand. | Data | Elon Musk | Tim Dillon | mark normand | | --- | --- | --- | --- | | Tweets downloaded | 400 | 3240 | 3202 | | Retweets | 14 | 658 | 116 | | Short tweets | 117 | 293 | 477 | | Tweets kept | 269 | 2289 | 2609 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/yk5i85xt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-marknorm-timjdillon's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zuzgzjdk) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zuzgzjdk/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/elonmusk-marknorm-timjdillon') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AnonymousSub/cline-emanuals-s10-AR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
# DistilBERT with 256k token embeddings This model was initialized with a word2vec token embedding matrix with 256k entries. The word2vec embeddings were trained on 100GB of data from C4, MSMARCO, News, Wikipedia and S2ORC for 3 epochs. The model was then trained on this corpus with MLM for 1M steps (batch size 64), during which the token embeddings were updated.
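As an unofficial usage sketch: the card does not give the repository id, so the path below is a placeholder, and the mask token is looked up from the tokenizer rather than assumed.

```python
from transformers import pipeline

# Placeholder repository id -- substitute the actual checkpoint of this model.
fill_mask = pipeline("fill-mask", model="path/to/distilbert-256k-mlm")

# Use the tokenizer's own mask token instead of hard-coding [MASK].
masked = f"The capital of France is {fill_mask.tokenizer.mask_token}."
print(fill_mask(masked))
```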
AnonymousSub/cline-emanuals-s10-SR
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en tags: - pytorch - causal-lm license: apache-2.0 datasets: - the_pile --- GPT-NeoX-20B is a 20 billion parameter autoregressive language model trained on [the Pile](https://pile.eleuther.ai/) using the [GPT-NeoX library](https://github.com/EleutherAI/gpt-neox). Its architecture intentionally resembles that of GPT-3, and is almost identical to that of [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B). Its training dataset contains a multitude of English-language texts, reflecting the general-purpose nature of this model. See the [accompanying paper](https://arxiv.org/abs/2204.06745) for details about model architecture (including how it differs from GPT-3), training procedure, and additional evaluations. ### Model details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745). For details about the training dataset, see [the Pile paper](https://arxiv.org/abs/2101.00027), and [its data sheet](https://arxiv.org/abs/2201.07311). - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing GPT-NeoX-20B documentation before asking about the model on Discord. For general correspondence: [[email protected]](mailto:[email protected]). <figure style="width:30em"> | Hyperparameter | Value | | ---------------------- | ----------- | | n<sub>parameters</sub> | 20554567680 | | n<sub>layers</sub> | 44 | | d<sub>model</sub> | 6144 | | n<sub>heads</sub> | 64 | | d<sub>head</sub> | 96 | | n<sub>vocab</sub> | 50257 | | Sequence Length | 2048 | | Learning Rate | 0.97 x 10<sup>-4</sup> | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | </figure> ### Uses and limitations #### Intended use GPT-NeoX-20B was developed primarily for research purposes. It learns an inner representation of the English language that can be used to extract features useful for downstream tasks. In addition to scientific uses, you may also further fine-tune and adapt GPT-NeoX-20B for deployment, as long as your use is in accordance with the Apache 2.0 license. This model works with the [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained GPT-NeoX-20B as a basis for your fine-tuned model, please note that you need to conduct your own risk and bias assessment. #### Out-of-scope use GPT-NeoX-20B is **not** intended for deployment as-is. It is not a product and cannot be used for human-facing interactions without supervision. GPT-NeoX-20B has not been fine-tuned for downstream tasks for which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means GPT-NeoX-20B will likely **not** respond to a given prompt the way products such as ChatGPT do. This is because, unlike GPT-NeoX-20B, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions and dialogue. This model is English-language only, and thus cannot be used for translation or generating text in other languages. #### Limitations and biases The core functionality of GPT-NeoX-20B is to take a string of text and predict the next token. Remember that the statistically most likely next token need not result in the most “accurate” text. 
Never rely on GPT-NeoX-20B to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. GPT-NeoX-20B may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. We recommend curating the outputs of this model before presenting it to a human reader. Please inform your audience that you are using artificially generated text. #### How to use If you simply want to try out some prompts, check out [this playground](https://20b.eleuther.ai/). GPT-NeoX-20B can be loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b") ``` ### Training #### Training dataset The Pile is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). The Pile was **not** deduplicated before being used to train GPT-NeoX-20B. #### Training procedure GPT-NeoX-20B was trained with a batch size of approximately 3.15M tokens (1538 sequences of 2048 tokens each), for a total of 150,000 steps. Tensor parallelism and pipeline parallelism were used to distribute the model across GPUs. Additional details about the training procedure are in [Section 3 of the accompanying paper](https://arxiv.org/abs/2204.06745). ### Evaluations <figure style="width:55em"> | Model | OpenAI’s LAMBADA | SciQ | PIQA | TriviaQA | ARC (Challenge) | | ------------- | :--------------: | :-----------: | :-----------: | :-----------: | :-------------: | | GPT-J-6B | 0.683 ± 0.006 | 0.910 ± 0.009 | 0.752 ± 0.010 | 0.170 ± 0.004 | 0.340 ± 0.014 | | FairSeq 6.7B | 0.673 ± 0.007 | 0.895 ± 0.010 | 0.762 ± 0.010 | 0.221 ± 0.004 | 0.329 ± 0.014 | | GPT-3 Curie | 0.693 ± 0.006 | 0.918 ± 0.009 | 0.767 ± 0.010 | 0.196 ± 0.004 | 0.334 ± 0.014 | | FairSeq 13B | 0.709 ± 0.006 | 0.910 ± 0.009 | 0.769 ± 0.010 | 0.270 ± 0.004 | 0.345 ± 0.014 | | GPT-NeoX-20B | 0.720 ± 0.006 | 0.928 ± 0.008 | 0.779 ± 0.010 | 0.259 ± 0.004 | 0.380 ± 0.014 | | GPT-3 DaVinci | 0.752 ± 0.006 | 0.949 ± 0.007 | 0.791 ± 0.009 | 0.409 ± 0.005 | 0.435 ± 0.014 | <figcaption>Zero-shot performance on selected natural language tasks.</figcaption> </figure> This is a heavily abridged version of the evaluation results. 
Appendix D of the [GPT-NeoX-20B paper](https://arxiv.org/abs/2204.06745) compares more model sizes, and contains additional evaluations, including on: zero and five-shot natural language tasks, zero and five-shot Basic Arithmetic and MATH, and zero-shot Hendrycks tasks. ### BibTeX To cite the GPT-NeoX-20B paper: ``` @misc{https://doi.org/10.48550/arxiv.2204.06745, doi = {10.48550/ARXIV.2204.06745}, url = {https://arxiv.org/abs/2204.06745}, author = {Black, Sid and Biderman, Stella and Hallahan, Eric and Anthony, Quentin and Gao, Leo and Golding, Laurence and He, Horace and Leahy, Connor and McDonell, Kyle and Phang, Jason and Pieler, Michael and Prashanth, USVSN Sai and Purohit, Shivanshu and Reynolds, Laria and Tow, Jonathan and Wang, Ben and Weinbach, Samuel}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {GPT-NeoX-20B: An Open-Source Autoregressive Language Model}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
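Building on the loading snippet in the “How to use” section above, here is a minimal, unofficial generation sketch. The sampling settings are illustrative; in practice the 20B checkpoint needs roughly 40 GB of memory, so half precision and a large GPU are advisable.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")

inputs = tokenizer("GPT-NeoX-20B is a 20 billion parameter", return_tensors="pt")
# Sample a short continuation; settings here are illustrative only.
output_ids = model.generate(**inputs, do_sample=True, temperature=0.9, max_new_tokens=64)
print(tokenizer.decode(output_ids[0]))
```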
AnonymousSub/cline-papers-roberta-0.585
[ "pytorch", "roberta", "transformers" ]
null
{ "architectures": [ "LecbertForPreTraining" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8575809199318569 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1319 - F1: 0.8576 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3264 | 1.0 | 197 | 0.1623 | 0.8139 | | 0.136 | 2.0 | 394 | 0.1331 | 0.8451 | | 0.096 | 3.0 | 591 | 0.1319 | 0.8576 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
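Since the card leaves usage open, a minimal, unofficial inference sketch follows. The repository path is a placeholder because the card does not state the namespace under which the checkpoint is published, and the example sentence is illustrative.

```python
from transformers import pipeline

# Placeholder repository id -- the card does not give the published path.
ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```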
AnonymousSub/cline-s10-AR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
2022-04-07T21:18:28Z
--- tags: - conversational --- # My Awesome Model of Eva
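Given the `conversational` tag, a typical DialoGPT-style chat loop would look like the sketch below; the repository id is a placeholder, since the card does not name the published checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id -- replace with the actual checkpoint of this model.
tokenizer = AutoTokenizer.from_pretrained("your-username/DialoGPT-Eva")
model = AutoModelForCausalLM.from_pretrained("your-username/DialoGPT-Eva")

chat_history_ids = None
for step in range(3):
    # Encode the user turn, appending the end-of-sequence token.
    new_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    # Generate a response conditioned on the full chat history so far.
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Eva:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```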
AnonymousSub/declutr-emanuals-s10-AR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
Fine-tuning command:

```bash
python run_squad.py \
  --model_name_or_path google/canine-s \
  --do_train \
  --do_eval \
  --per_gpu_train_batch_size 1 \
  --per_gpu_eval_batch_size 1 \
  --gradient_accumulation_steps 128 \
  --learning_rate 3e-5 \
  --num_train_epochs 3 \
  --max_seq_length 1024 \
  --doc_stride 128 \
  --max_answer_length 240 \
  --output_dir canine-s-squad \
  --model_type bert
```

Model configuration:

```json
{
  "_name_or_path": "google/canine-s",
  "architectures": [
    "CanineForQuestionAnswering"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 57344,
  "downsampling_rate": 4,
  "eos_token_id": 57345,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "local_transformer_stride": 128,
  "max_position_embeddings": 16384,
  "model_type": "canine",
  "num_attention_heads": 12,
  "num_hash_buckets": 16384,
  "num_hash_functions": 8,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "torch_dtype": "float32",
  "transformers_version": "4.19.0.dev0",
  "type_vocab_size": 16,
  "upsampling_kernel_size": 4,
  "use_cache": true
}
```

Evaluation results:

```
{'exact': 64.70198675496688, 'f1': 76.57594921776277}
```
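As an unofficial inference sketch: the card only names the local output directory `canine-s-squad`, not a published repository, so the path below is a placeholder, and the question/context pair is illustrative.

```python
from transformers import pipeline

# Placeholder repository id -- the card only gives the local --output_dir.
qa = pipeline("question-answering", model="your-username/canine-s-squad")

result = qa(
    question="What does CANINE operate on?",
    context="CANINE is a tokenization-free encoder that operates directly on character sequences.",
)
print(result["answer"], result["score"])
```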
AnonymousSub/declutr-emanuals-s10-SR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1317183233495388160/nLbBT6WF_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">3bkreno</div> <div style="text-align: center; font-size: 14px;">@abovethebed</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 3bkreno. | Data | 3bkreno | | --- | --- | | Tweets downloaded | 484 | | Retweets | 111 | | Short tweets | -468 | | Tweets kept | 841 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17s3cgho/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @abovethebed's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2al4dbp2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2al4dbp2/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/abovethebed') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AnonymousSub/declutr-model-emanuals
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - conversational --- # Ron Swanson DialoGPT Model
AnonymousSub/declutr-s10-AR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- tags: - huggan - gan # See a list of available tags here: # https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12 # task: unconditional-image-generation or conditional-image-generation or image-to-image license: mit --- # MyModelName ## Model description Describe the model here (what it does, what it's used for, etc.) ## Intended uses & limitations #### How to use ```python # You can include sample code which will be formatted ``` #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data Describe the data you used to train the model. If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data. ## Training procedure Preprocessing, hardware used, hyperparameters... ## Eval results ## Generated Images You can embed local or remote images using `![](...)` ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020} } ```
AnonymousSub/declutr-s10-SR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
2022-04-08T01:23:57Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1580 - F1: 0.8547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3718 | 1.0 | 269 | 0.1761 | 0.8223 | | 0.1535 | 2.0 | 538 | 0.1608 | 0.8404 | | 0.1074 | 3.0 | 807 | 0.1580 | 0.8547 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
AnonymousSub/hier_triplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-04-08T01:49:20Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.7730210016155089 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2928 - F1: 0.7730 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.4548 | 1.0 | 27 | 0.6522 | 0.5457 | | 0.5214 | 2.0 | 54 | 0.3476 | 0.7404 | | 0.3186 | 3.0 | 81 | 0.2928 | 0.7730 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
AnonymousSub/hier_triplet_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.en metrics: - name: F1 type: f1 value: 0.5793693212185996 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.5084 - F1: 0.5794 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.7119 | 1.0 | 19 | 1.0009 | 0.2266 | | 0.891 | 2.0 | 38 | 0.6405 | 0.5281 | | 0.6023 | 3.0 | 57 | 0.5084 | 0.5794 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
AnonymousSub/rule_based_bert_hier_diff_equal_wts_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - conversational --- # Harry Potter2 DialoGPT Model
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - accuracy - f1 model-index: - name: Bert_Test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bert_Test This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1965 - Precision: 0.9332 - Accuracy: 0.9223 - F1: 0.9223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:--------:|:------:| | 0.6717 | 0.4 | 500 | 0.6049 | 0.7711 | 0.6743 | 0.6112 | | 0.5704 | 0.8 | 1000 | 0.5299 | 0.7664 | 0.7187 | 0.6964 | | 0.52 | 1.2 | 1500 | 0.4866 | 0.7698 | 0.7537 | 0.7503 | | 0.4792 | 1.6 | 2000 | 0.4292 | 0.8031 | 0.793 | 0.7927 | | 0.4332 | 2.0 | 2500 | 0.3920 | 0.8318 | 0.8203 | 0.8198 | | 0.381 | 2.4 | 3000 | 0.3723 | 0.9023 | 0.8267 | 0.8113 | | 0.3625 | 2.8 | 3500 | 0.3134 | 0.8736 | 0.8607 | 0.8601 | | 0.3325 | 3.2 | 4000 | 0.2924 | 0.8973 | 0.871 | 0.8683 | | 0.3069 | 3.6 | 4500 | 0.2671 | 0.8916 | 0.8847 | 0.8851 | | 0.2866 | 4.0 | 5000 | 0.2571 | 0.8920 | 0.8913 | 0.8926 | | 0.2595 | 4.4 | 5500 | 0.2450 | 0.8980 | 0.9 | 0.9015 | | 0.2567 | 4.8 | 6000 | 0.2246 | 0.9057 | 0.9043 | 0.9054 | | 0.2255 | 5.2 | 6500 | 0.2263 | 0.9332 | 0.905 | 0.9030 | | 0.2237 | 5.6 | 7000 | 0.2083 | 0.9265 | 0.9157 | 0.9156 | | 0.2248 | 6.0 | 7500 | 0.2039 | 0.9387 | 0.9193 | 0.9185 | | 0.2086 | 6.4 | 8000 | 0.2038 | 0.9436 | 0.9193 | 0.9181 | | 0.2029 | 6.8 | 8500 | 0.1965 | 0.9332 | 0.9223 | 0.9223 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: - es tags: - biomedical - clinical - ehr - spanish license: apache-2.0 metrics: - ppl widget: - text: "El único antecedente personal a reseñar era la <mask> arterial." - text: "Las radiologías óseas de cuerpo entero no detectan alteraciones <mask>, ni alteraciones vertebrales." - text: "En el <mask> toraco-abdómino-pélvico no se encontraron hallazgos patológicos de interés." --- # Biomedical-clinical language model for Spanish ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official [repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es). ## Intended uses and limitations The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification. ## How to use ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training ### Tokenization and model pretraining This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a **biomedical-clinical** corpus in Spanish collected from several sources (see next section). The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. ### Training corpora and preprocessing The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers, and a real-world clinical corpus collected from more than 278K clinical documents and notes. To obtain a high-quality training corpus while retaining the idiosyncrasies of the clinical language, a cleaning pipeline has been applied only to the biomedical corpora, keeping the clinical corpus uncleaned. 
Essentially, the cleaning operations used are: - data parsing in different formats - sentence splitting - language detection - filtering of ill-formed sentences - deduplication of repetitive contents - keep the original document boundaries Then, the biomedical corpora are concatenated and further global deduplication among the biomedical corpora has been applied. Eventually, the clinical corpus is concatenated to the cleaned biomedical corpus resulting in a medium-size biomedical-clinical corpus for Spanish composed of more than 1B tokens. The table below shows some basic statistics of the individual cleaned corpora: | Name | No. tokens | Description | |-----------------------------------------------------------------------------------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [Medical crawler](https://zenodo.org/record/4561970) | 903,558,13 | Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. | | Clinical cases misc. | 102,855,267 | A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document. | | EHR documents | 95,267,20 | Collection of more than 278K clinical documents, including discharge reports, clinical course notes and X-ray reports, for a total of 91M tokens. | | [Scielo](https://zenodo.org/record/2541681#.YlP1DshBwio) | 60,007,289 | Publications written in Spanish crawled from the Spanish SciELO server in 2017. | | [BARR2_background](https://temu.bsc.es/BARR2/downloads/background_set.raw_text.tar.bz2) | 24,516,442 | Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines. | | Wikipedia_life_sciences | 13,890,501 | Wikipedia articles crawled 04/01/2021 with the [Wikipedia API python library](https://pypi.org/project/Wikipedia-API/) starting from the "Ciencias\_de\_la\_vida" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content. | | Patents | 13,463,387 | Google Patent in Medical Domain for Spain (Spanish). The accepted codes (Medical Domain) for Json files of patents are: "A61B", "A61C","A61F", "A61H", "A61K", "A61L","A61M", "A61B", "A61P". | | [EMEA](http://opus.nlpl.eu/download.php?f=EMEA/v3/moses/en-es.txt.zip) | 5,377,448 | Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. | | [mespen_Medline](https://zenodo.org/record/3562536#.YTt1fH2xXbR) | 4,166,077 | Spanish-side articles extracted from a collection of Spanish-English parallel corpus consisting of biomedical scientific literature. The collection of parallel resources is aggregated from the MedlinePlus source. | | PubMed | 1,858,966 | Open-access articles from the PubMed repository crawled in 2017. | ## Evaluation The model has been fine-tuned on three Named Entity Recognition (NER) tasks using three clinical NER datasets: - [PharmaCoNER](https://zenodo.org/record/4270158): is a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/). 
- [CANTEMIST](https://zenodo.org/record/3978041#.YTt5qH2xXbQ): is a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ). - ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables. We addressed the NER task as a token classification problem using a standard linear layer along with the BIO tagging schema. We compared our models with the general-domain Spanish [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne), the general-domain multilingual model that supports Spanish [mBERT](https://huggingface.co/bert-base-multilingual-cased), the domain-specific English model [BioBERT](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2), and three domain-specific models based on continual pre-training, [mBERT-Galén](https://ieeexplore.ieee.org/document/9430499), [XLM-R-Galén](https://ieeexplore.ieee.org/document/9430499) and [BETO-Galén](https://ieeexplore.ieee.org/document/9430499). The table below shows the F1 scores obtained: | Tasks/Models | bsc-bio-ehr-es | XLM-R-Galén | BETO-Galén | mBERT-Galén | mBERT | BioBERT | roberta-base-bne | |--------------|----------------|--------------------|--------------|--------------|--------------|--------------|------------------| | PharmaCoNER | **0.8913** | 0.8754 | 0.8537 | 0.8594 | 0.8671 | 0.8545 | 0.8474 | | CANTEMIST | **0.8340** | 0.8078 | 0.8153 | 0.8168 | 0.8116 | 0.8070 | 0.7875 | | ICTUSnet | **0.8756** | 0.8716 | 0.8498 | 0.8509 | 0.8631 | 0.8521 | 0.8677 | The fine-tuning scripts can be found in the official GitHub [repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es). ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use these models, please cite our work: ```bibtext @inproceedings{carrino-etal-2022-pretrained, title = "Pretrained Biomedical Language Models for Clinical {NLP} in {S}panish", author = "Carrino, Casimiro Pio and Llop, Joan and P{\`a}mies, Marc and Guti{\'e}rrez-Fandi{\~n}o, Asier and Armengol-Estap{\'e}, Jordi and Silveira-Ocampo, Joaqu{\'\i}n and Valencia, Alfonso and Gonzalez-Agirre, Aitor and Villegas, Marta", booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.bionlp-1.19", doi = "10.18653/v1/2022.bionlp-1.19", pages = "193--199", abstract = "This work presents the first large-scale biomedical Spanish language models trained from scratch, using large biomedical corpora consisting of a total of 1.1B tokens and an EHR corpus of 95M tokens. We compared them against general-domain and other domain-specific models for Spanish on three clinical NER tasks. 
As main results, our models are superior across the NER tasks, rendering them more convenient for clinical NLP applications. Furthermore, our findings indicate that when enough data is available, pre-training from scratch is better than continual pre-training when tested on clinical tasks, raising an exciting research question about which approach is optimal. Our models and fine-tuning scripts are publicly available at HuggingFace and GitHub.", } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. </details>
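The “How to use” section of the card above is left empty. As an illustrative, unofficial sketch for the Fill Mask task, assuming the checkpoint is published under the `bsc-bio-ehr-es` name that appears in the evaluation table (guessed here as `PlanTL-GOB-ES/bsc-bio-ehr-es`), and reusing one of the widget examples from the card metadata:

```python
from transformers import pipeline

# Assumed repository id, inferred from the "bsc-bio-ehr-es" column of the evaluation table.
fill_mask = pipeline("fill-mask", model="PlanTL-GOB-ES/bsc-bio-ehr-es")

# Widget example taken from the card metadata.
print(fill_mask("El único antecedente personal a reseñar era la <mask> arterial."))
```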