modelId: string, lengths 4 to 81
tags: sequence
pipeline_tag: string, 17 classes
config: dict
downloads: int64, 0 to 59.7M
first_commit: timestamp[ns, tz=UTC]
card: string, lengths 51 to 438k
DannyMichael/ECU911
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: blocks results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.4444444477558136 --- # blocks Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### blue color ![blue color](images/blue_color.jpg) #### cyan color ![cyan color](images/cyan_color.jpg) #### green color ![green color](images/green_color.jpg) #### orange color ![orange color](images/orange_color.jpg) #### red color ![red color](images/red_color.jpg) #### yellow color ![yellow color](images/yellow_color.jpg)
DarkKibble/DialoGPT-medium-Tankman
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - tr license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-common_voice-tr-demo-dist results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-tr-demo-dist This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 0.3934 - Wer: 0.3305 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 8 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5459 | 0.23 | 100 | 3.6773 | 1.0 | | 3.2247 | 0.46 | 200 | 3.1491 | 0.9999 | | 2.3457 | 0.69 | 300 | 2.4236 | 1.0041 | | 0.9149 | 0.92 | 400 | 0.9471 | 0.7684 | | 0.6622 | 1.15 | 500 | 0.7518 | 0.6863 | | 0.7205 | 1.38 | 600 | 0.6387 | 0.6402 | | 0.6978 | 1.61 | 700 | 0.5611 | 0.5739 | | 0.5317 | 1.84 | 800 | 0.5061 | 0.5418 | | 0.5222 | 2.07 | 900 | 0.4839 | 0.5344 | | 0.4467 | 2.3 | 1000 | 0.5060 | 0.5339 | | 0.3196 | 2.53 | 1100 | 0.4619 | 0.5213 | | 0.276 | 2.76 | 1200 | 0.4595 | 0.5020 | | 0.3569 | 2.99 | 1300 | 0.4339 | 0.4901 | | 0.2236 | 3.22 | 1400 | 0.4602 | 0.4887 | | 0.293 | 3.45 | 1500 | 0.4376 | 0.4639 | | 0.1677 | 3.68 | 1600 | 0.4371 | 0.4605 | | 0.1838 | 3.91 | 1700 | 0.4116 | 0.4589 | | 0.1225 | 4.14 | 1800 | 0.4144 | 0.4495 | | 0.2301 | 4.37 | 1900 | 0.4250 | 0.4567 | | 0.1931 | 4.6 | 2000 | 0.4081 | 0.4470 | | 0.1427 | 4.83 | 2100 | 0.4295 | 0.4482 | | 0.361 | 5.06 | 2200 | 0.4374 | 0.4445 | | 0.3272 | 5.29 | 2300 | 0.4088 | 0.4258 | | 0.3686 | 5.52 | 2400 | 0.4087 | 0.4258 | | 0.3087 | 5.75 | 2500 | 0.4100 | 0.4371 | | 0.4637 | 5.98 | 2600 | 0.4038 | 0.4219 | | 0.1485 | 6.21 | 2700 | 0.4361 | 0.4197 | | 0.1341 | 6.44 | 2800 | 0.4217 | 0.4132 | | 0.1185 | 6.67 | 2900 | 0.4244 | 0.4097 | | 0.1588 | 6.9 | 3000 | 0.4212 | 0.4181 | | 0.0697 | 7.13 | 3100 | 0.3981 | 0.4073 | | 0.0491 | 7.36 | 3200 | 0.3992 | 0.4010 | | 0.088 | 7.59 | 3300 | 0.4206 | 0.4022 | | 0.0731 | 7.82 | 3400 | 0.3998 | 0.3841 | | 0.2767 | 8.05 | 3500 | 0.4195 | 0.3829 | | 0.1725 | 8.28 | 3600 | 0.4167 | 0.3946 | | 0.1242 | 8.51 | 3700 | 0.4177 | 0.3821 | | 0.1133 | 8.74 | 3800 | 0.3993 | 0.3802 | | 0.1952 | 8.97 | 3900 | 0.4132 | 0.3904 | | 0.1399 | 9.2 | 4000 | 0.4010 | 0.3795 | | 0.047 | 9.43 | 4100 | 0.4128 | 0.3703 | | 0.049 | 9.66 | 4200 | 0.4319 | 0.3670 | | 0.0994 | 9.89 | 4300 | 0.4118 | 0.3631 | | 0.1209 | 10.11 | 4400 | 0.4296 | 0.3722 | | 0.0484 | 10.34 | 4500 | 0.4130 | 0.3615 | | 0.2065 | 10.57 | 4600 | 0.3958 | 0.3668 | | 0.133 | 10.8 | 4700 | 0.4102 | 0.3679 | | 0.0622 | 11.03 | 4800 | 0.4137 | 0.3585 | | 0.0999 | 11.26 | 4900 | 0.4042 | 0.3583 | | 0.0346 | 11.49 | 
5000 | 0.4183 | 0.3573 | | 0.072 | 11.72 | 5100 | 0.4060 | 0.3530 | | 0.0365 | 11.95 | 5200 | 0.3968 | 0.3483 | | 0.0615 | 12.18 | 5300 | 0.3958 | 0.3485 | | 0.1067 | 12.41 | 5400 | 0.3987 | 0.3453 | | 0.0253 | 12.64 | 5500 | 0.4182 | 0.3405 | | 0.0636 | 12.87 | 5600 | 0.4199 | 0.3458 | | 0.0506 | 13.1 | 5700 | 0.4056 | 0.3412 | | 0.0944 | 13.33 | 5800 | 0.4061 | 0.3381 | | 0.1187 | 13.56 | 5900 | 0.4113 | 0.3381 | | 0.0237 | 13.79 | 6000 | 0.3973 | 0.3343 | | 0.0166 | 14.02 | 6100 | 0.4001 | 0.3357 | | 0.1189 | 14.25 | 6200 | 0.3931 | 0.3315 | | 0.0375 | 14.48 | 6300 | 0.3944 | 0.3329 | | 0.0537 | 14.71 | 6400 | 0.3953 | 0.3308 | | 0.045 | 14.94 | 6500 | 0.3933 | 0.3303 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.9.1+cu102 - Datasets 1.13.3 - Tokenizers 0.11.6
DarkWolf/kn-electra-small
[ "pytorch", "electra", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - mlner2021 metrics: - precision - recall - f1 - accuracy model-index: - name: mlner-mlwptok-muril results: - task: name: Token Classification type: token-classification dataset: name: mlner2021 type: mlner2021 args: MLNER2021 metrics: - name: Precision type: precision value: 0.0 - name: Recall type: recall value: 0.0 - name: F1 type: f1 value: 0.0 - name: Accuracy type: accuracy value: 0.8112759262826688 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mlner-mlwptok-muril This model is a fine-tuned version of [google/muril-base-cased](https://huggingface.co/google/muril-base-cased) on the mlner2021 dataset. It achieves the following results on the evaluation set: - Loss: 0.8331 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.8113 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:---:|:--------:| | 1.447 | 1.0 | 1389 | 0.9396 | 0.0 | 0.0 | 0.0 | 0.8113 | | 0.898 | 2.0 | 2778 | 0.8883 | 0.0 | 0.0 | 0.0 | 0.8113 | | 0.859 | 3.0 | 4167 | 0.8721 | 0.0 | 0.0 | 0.0 | 0.8113 | | 0.8302 | 4.0 | 5556 | 0.8666 | 0.0 | 0.0 | 0.0 | 0.8113 | | 0.8165 | 5.0 | 6945 | 0.8403 | 0.0 | 0.0 | 0.0 | 0.8113 | | 0.8143 | 6.0 | 8334 | 0.8376 | 0.0 | 0.0 | 0.0 | 0.8113 | | 0.8034 | 7.0 | 9723 | 0.8393 | 0.0 | 0.0 | 0.0 | 0.8113 | | 0.7766 | 8.0 | 11112 | 0.8383 | 0.0 | 0.0 | 0.0 | 0.8113 | | 0.794 | 9.0 | 12501 | 0.8346 | 0.0 | 0.0 | 0.0 | 0.8113 | | 0.7858 | 10.0 | 13890 | 0.8331 | 0.0 | 0.0 | 0.0 | 0.8113 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
DarkestSky/distilbert-base-uncased-finetuned-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer model-index: name: Gram-Vaani-Harveen-Chadda-Fine-Tuning --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Gram-Vaani-Harveen-Chadda-Fine-Tuning This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-hindi-him-4200](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-hindi-him-4200) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8934 - Wer: 0.359 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-----:| | 4.5558 | 21.05 | 400 | 0.8934 | 0.359 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
Darya/layoutlmv2-finetuned-funsd-test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-04-12T07:41:18Z
--- license: mit --- # ReACC-py-retriever This is the retrieval model for [ReACC: A Retrieval-Augmented Code Completion Framework](https://arxiv.org/abs/2203.07722). In this paper, the model is used to retrieve similar code given an incomplete code snippet as the query. The model can also be used for incomplete code-to-code search and code clone detection. `py-retriever` is a BERT-like encoder consisting of 12 transformer layers. It is continually pre-trained on [GraphCodeBERT](https://huggingface.co/microsoft/graphcodebert-base) with contrastive learning on the Python programming language. More details can be found in our paper. Note that the format of the input code differs from the original source code. We normalize the source code to better capture information from line breaks and indentation in Python. An example of the input is: ```python sum = 0<endofline>for val in numbers:<endofline><INDENT>sum = sum+val ``` For more information about how to convert source code into this format, please refer to the [ReACC GitHub repo](https://github.com/microsoft/ReACC).
Daryaflp/roberta-retrained_ru_covid
[ "pytorch", "tensorboard", "roberta", "fill-mask", "transformers", "generated_from_trainer", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - generated_from_trainer model-index: - name: kobigbird-bert-base-finetuned-klue results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-bert-base-finetuned-klue This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.5589 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.888 | 13.89 | 500 | 3.5589 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
DataikuNLP/TinyBERT_General_4L_312D
[ "pytorch", "jax", "bert", "arxiv:1909.10351", "transformers" ]
null
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
74
null
## This is a PyTorch implementation of the paper [Multi-Source Domain Adaptation Based on Federated Knowledge Alignment](https://arxiv.org/abs/2203.11635). ## Table of Contents * [General information](#general-information) * [Running the systems](#running-the-systems) * [Further readings](#further-readings) ## General information FedKA consists of three building blocks, i.e., a feature disentangler, embedding matching, and federated voting, and aims to improve the global model’s generality in tackling an unseen task with knowledge transferred from different clients’ model learning. ## Running the systems The systems for the Digit-Five tasks can be run with the Jupyter Notebook "FedKA-Digit-Five.ipynb". The dataset can be downloaded from [Digit-Five](https://drive.google.com/drive/folders/1nwa-9TPm_-pZsE9uNalaDS909LXNaFjT?usp=sharing). ## Further readings * [Decentralized Deep Learning for Multi-Access Edge Computing: A Survey on Communication Efficiency and Trustworthiness](https://www.techrxiv.org/articles/preprint/Decentralized_Deep_Learning_for_Multi-Access_Edge_Computing_A_Survey_on_Communication_Efficiency_and_Trustworthiness/16691230), Yuwei Sun et al., IEEE Transactions on Artificial Intelligence.
DataikuNLP/camembert-base
[ "pytorch", "tf", "camembert", "fill-mask", "fr", "dataset:oscar", "arxiv:1911.03894", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "CamembertForMaskedLM" ], "model_type": "camembert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-cased-IUChatbot-ontologyDts-bertBaseCased-bertTokenizer-12April2022 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-IUChatbot-ontologyDts-bertBaseCased-bertTokenizer-12April2022 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3856 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 357 | 0.4760 | | 0.6305 | 2.0 | 714 | 0.3957 | | 0.4345 | 3.0 | 1071 | 0.3856 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2
[ "pytorch", "bert", "arxiv:1908.10084", "sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,517
null
--- tags: - huggan - gan # See a list of available tags here: # https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12 # task: unconditional-image-generation or conditional-image-generation or image-to-image license: mit --- # MyModelName ## Model description Describe the model here (what it does, what it's used for, etc.) ## Intended uses & limitations #### How to use ```python # You can include sample code which will be formatted ``` #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data Describe the data you used to train the model. If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data. ## Training procedure Preprocessing, hardware used, hyperparameters... ## Eval results ## Generated Images You can embed local or remote images using `![](...)` ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020} } ```
DavidSpaceG/MSGIFSR
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - detectron2 - layout_parser --- Model binaries downloaded from https://github.com/Layout-Parser/layout-parser/blob/c0044a08da7a630e2241348e597a08ba6aa87ba1/src/layoutparser/models/detectron2/catalog.py
Davlan/bert-base-multilingual-cased-finetuned-amharic
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
109
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: tf-distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tf-distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Tokenizers 0.11.6
Davlan/bert-base-multilingual-cased-finetuned-hausa
[ "pytorch", "tf", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
151
null
--- language: en datasets: - librispeech_asr tags: - audio - automatic-speech-recognition - hf-asr-leaderboard license: apache-2.0 widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac model-index: - name: patrickvonplaten/wav2vec2-base-960h-4-gram results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 2.59 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 6.46 --- # Wav2Vec2-Base-960h + 4-gram This model is identical to [Facebook's Wav2Vec2-Base-960h](https://huggingface.co/facebook/wav2vec2-base-960h), but is augmented with an English 4-gram. The `4-gram.arpa.gz` of [Librispeech's official ngrams](https://www.openslr.org/11) is used. ## Evaluation This code snippet shows how to evaluate **patrickvonplaten/wav2vec2-base-960h-4-gram** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import AutoModelForCTC, AutoProcessor import torch from jiwer import wer model_id = "patrickvonplaten/wav2vec2-base-960h-4-gram" librispeech_eval = load_dataset("librispeech_asr", "other", split="test") model = AutoModelForCTC.from_pretrained(model_id).to("cuda") processor = AutoProcessor.from_pretrained(model_id) def map_to_pred(batch): inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt") inputs = {k: v.to("cuda") for k,v in inputs.items()} with torch.no_grad(): logits = model(**inputs).logits transcription = processor.batch_decode(logits.cpu().numpy()).text[0] batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, remove_columns=["audio"]) print(wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | |---|---| | 2.59 | 6.46 |
Davlan/bert-base-multilingual-cased-finetuned-kinyarwanda
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- language: en datasets: - librispeech_asr tags: - audio - automatic-speech-recognition - hf-asr-leaderboard license: apache-2.0 widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac model-index: - name: patrickvonplaten/wav2vec2-large-960h-lv60-self-4-gram results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 1.84 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 3.71 --- # Wav2Vec2-Base-960h + 4-gram This model is identical to [Facebook's Wav2Vec2-Large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self), but is augmented with an English 4-gram. The `4-gram.arpa.gz` of [Librispeech's official ngrams](https://www.openslr.org/11) is used. ## Evaluation This code snippet shows how to evaluate **patrickvonplaten/wav2vec2-large-960h-lv60-self-4-gram** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import AutoModelForCTC, AutoProcessor import torch from jiwer import wer model_id = "patrickvonplaten/wav2vec2-large-960h-lv60-self-4-gram" librispeech_eval = load_dataset("librispeech_asr", "other", split="test") model = AutoModelForCTC.from_pretrained(model_id).to("cuda") processor = AutoProcessor.from_pretrained(model_id) def map_to_pred(batch): inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt") inputs = {k: v.to("cuda") for k,v in inputs.items()} with torch.no_grad(): logits = model(**inputs).logits transcription = processor.batch_decode(logits.cpu().numpy()).text[0] batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, remove_columns=["audio"]) print(wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | |---|---| | 1.84 | 3.71 |
Davlan/xlm-roberta-base-finetuned-luganda
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- tags: - huggan - gan # See a list of available tags here: # https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12 # task: unconditional-image-generation or conditional-image-generation or image-to-image license: mit --- # MyModelName ## Model description Describe the model here (what it does, what it's used for, etc.) ## Intended uses & limitations #### How to use ```python # You can include sample code which will be formatted ``` #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data Describe the data you used to train the model. If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data. ## Training procedure Preprocessing, hardware used, hyperparameters... ## Eval results ## Generated Images You can embed local or remote images using `![](...)` ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020} } ```
Davlan/xlm-roberta-large-masakhaner
[ "pytorch", "tf", "xlm-roberta", "token-classification", "arxiv:2103.11811", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,449
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.4783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 91 | 9.2794 | | No log | 2.0 | 182 | 8.1920 | | No log | 3.0 | 273 | 7.6378 | | No log | 4.0 | 364 | 7.4783 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cpu - Datasets 2.0.0 - Tokenizers 0.11.6
Dawn576/Dawn
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - image_folder metrics: - accuracy model-index: - name: convnext-tiny-224-finetuned-eurosat-albumentations results: - task: name: Image Classification type: image-classification dataset: name: image_folder type: image_folder args: default metrics: - name: Accuracy type: accuracy value: 0.9748148148148148 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-tiny-224-finetuned-eurosat-albumentations This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.0727 - Accuracy: 0.9748 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.141 | 1.0 | 190 | 0.1496 | 0.9544 | | 0.0736 | 2.0 | 380 | 0.0958 | 0.9719 | | 0.0568 | 3.0 | 570 | 0.0727 | 0.9748 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
Declan/Breitbart_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - huggan - gan # See a list of available tags here: # https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12 # task: unconditional-image-generation or conditional-image-generation or image-to-image license: mit --- # MyModelName ## Model description Describe the model here (what it does, what it's used for, etc.) ## Intended uses & limitations #### How to use ```python # You can include sample code which will be formatted ``` #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data Describe the data you used to train the model. If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data. ## Training procedure Preprocessing, hardware used, hyperparameters... ## Eval results ## Generated Images You can embed local or remote images using `![](...)` ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020} } ```
Declan/Breitbart_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: claim-spotter results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # claim-spotter This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3266 - F1: 0.8709 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3697 | 1.0 | 830 | 0.2728 | 0.8589 | | 0.1475 | 2.0 | 1660 | 0.3266 | 0.8709 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
Declan/ChicagoTribune_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
| Label ID | Label Name |
|:--------:|:-----------|
| 0 | 0 |
| 1 | B-PER |
| 2 | I-PER |
| 3 | B-ORG |
| 4 | I-ORG |
| 5 | B-LOC |
| 6 | I-LOC |
Declan/ChicagoTribune_model_v7
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - huggan - gan datasets: - arakesh/uavid-15-hq-mixedres # See a list of available tags here: # https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12 # task: unconditional-image-generation or conditional-image-generation or image-to-image license: mit --- # MyModelName ## Model description [Pix2pix Model](https://arxiv.org/abs/1611.07004) is a conditional adversarial network, a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. ## Intended uses & limitations: Used for reconstruction of images from edges #### How to use ```python from torchvision.transforms import Compose, Resize, ToTensor, Normalize from PIL import Image from torchvision.utils import save_image import cv2 from huggan.pytorch.pix2pix.modeling_pix2pix import GeneratorUNet transform = Compose( [ Resize((256, 256), Image.BICUBIC), ToTensor(), Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ] ) model = GeneratorUNet.from_pretrained('huggan/pix2pix-uavid-15') def predict_fn(img): inp = transform(img).unsqueeze(0) out = model(inp) save_image(out, 'out.png', normalize=True) return 'out.png' predict_fn(img) ``` #### Limitations and bias * Gives unrealistic colors in the image ## Training data * [edges2shoes](https://huggingface.co/datasets/huggan/edges2shoes) ## Training procedure ``` # clone the repository git clone https://github.com/huggingface/community-events.git pip install . # change directory cd community-events/huggan/pytorch/pix2pix/ # define config accelerate config # launch training with required parameters accelerate launch train.py --checkpoint_interval 1 --dataset arakesh/uavid-15-hq-mixedres --push_to_hub --model_name pix2pix-uavid-15 --batch_size 2 --n_epochs 50 --image_size 1024 --sample_interval 500 ``` ## Generated Images Here, * First Image Row: Input Image * Second Image Row: Generated Image * Third Image Row: Target Image ![image1](34000.png) ![image2](35000.png) ### BibTeX entry and citation info ```bibtex @article{pix2pix2017, title={Image-to-Image Translation with Conditional Adversarial Networks}, author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A}, journal={CVPR}, year={2017} } ```
Declan/ChicagoTribune_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: sagemaker-distilbert-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion args: default metrics: - type: accuracy value: 0.921 name: Accuracy - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: default split: test metrics: - type: accuracy value: 0.921 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGRkMDBjODEwZWI2OTlhZmQ4ZGQ2MjRhZDMzZjA1ZTNkMWU0OTdhZTA3NjAzZGI1ZGFiMjFlNGQxY2MyM2Y2NiIsInZlcnNpb24iOjF9.lk_zOxIIclaySp7edHaCoBD4hSHBJkUNcv1z-2vhO_8Af5JYOgRjlNloztRJd9SuRISEyH4srmqsRx8hqiivAA - type: precision value: 0.8870419502496194 name: Precision Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTZhNTg0ODU0YmYxZGMxNDZhOTg4M2Y2OTUzZGZmZmQ3ZDdmMmQyMWQ1MTc3ZDIzM2ZlYjg3NGVhOTBhNzJiMiIsInZlcnNpb24iOjF9._ZojNfDN63jqrciNdn8xWhJ38IkaeIy_y8gOU0r9Wf3Ki06ZcrX4qAz8KVF9LIQffmK4EupUAlNFycxf3SZYBA - type: precision value: 0.921 name: Precision Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2Y0ZGRhYWMxYTIwOGQzYzQ1MGIxOGZkMzM5YWYxN2RhZTgyZjJiNzc2MDY3YTk4YWYyOGI0MDE0M2JiYTk0NCIsInZlcnNpb24iOjF9.tPd-tWnKPt13vGMXk_OGpCgllvinP0Pry5YAvvcjnIKo33eJ5RCKay8u5Q2TTLCU71Lndf_x-A2qWInLXEk-AA - type: precision value: 0.9208079974712109 name: Precision Weighted verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGUxYjkzM2MzOWNhYjIyZGE4ODYyY2E1MTRiMGNiMGM3NDk1Y2Q3ZjEyZDAzY2ZhZTFjYmE3YzY1MjM0YWMzZiIsInZlcnNpb24iOjF9.XNf83HOOYCJmb_BKpNM-ullwiqLoRBQLbA4FAa6v3bfH_BLwK3vve_Ym3xa7uNRkuJGM-clvkeXEaEqAz99JBA - type: recall value: 0.8688429370077566 name: Recall Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2U0MmYzNTkyY2U0MDNiMDBjY2I3YWI0OGViZDBlZjJhNDBmZWE3NGYxMWFjMDFmMmVhM2RhZmY4ZWVlNzNkMiIsInZlcnNpb24iOjF9.J3qsAJm9T7kqmuOFs67Fq7RLEN2-cQ2RgUhqvvyO_OWXu3JVucTgCqQhpoKa1GHWVX0illbbozmAQ5OK5wBXCg - type: recall value: 0.921 name: Recall Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGQ2NGNkYWQwMWY3ZmIyNzBiMTEwM2M4MWVlNzJiMGExMjk5MmY4ODgxYjM1YjUwNGIwMjNkZDk3NTBlNjI5NCIsInZlcnNpb24iOjF9.iZgzAfNdWlyEKAWwE32o3D6Ely76ZJ2ySVxl0jBetL4YGWgOHSybrYvcZ2kB8sx3QfOc5L_vWyWNSbY5HAVeAA - type: recall value: 0.921 name: Recall Weighted verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODcwOWIxMzFjM2VmM2E1MmZlOGM1N2JiNjQwMGM0MzEzNzQ5NzJlM2I3MDdkYTMzN2NlYzU5ZDQwODBjYWFmZiIsInZlcnNpb24iOjF9.PlvoxtJ9Bj5G2w_E6Cx5VG5maRPP5dn4YzOX0xYPu_J7iiXRRLvwp12Q6vIUwsZMoBM4jACrh-rQKZ_g_yyHCw - type: f1 value: 0.87642650638535 name: F1 Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2RiMzFkMGVhYjc3MmJjOGFkYTZiYzAxMGUyOTBmYWJhYmQ3NTg2M2MxZGExOGI4NTkwOWM2ODRlMGJjZjM3ZSIsInZlcnNpb24iOjF9.hVbjwMlCeyjJ-0BEhGuaI5T8MOsAkAgLTnp7zlhUEi2cireIEfAkpdsmBPuQJyZYaGZ5ZXmSybAP08X1ouNoBw - type: f1 value: 0.9209999999999999 name: F1 Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTk0ODExMTliNTJlYjExNWExNDQ2NDUwNjkyMjA3ODg5YTk0NmFhNmMxZGQ0MzMxZjgxNGFjMmNkZWI1MTMzOCIsInZlcnNpb24iOjF9.dqucaDtPQ0A1KZkT4q9Ojfgtf2wZiJmjaKrvTdbhsvf7gNfYnJsMGaDIOxp_YoCEXGRMXKsknANx_VA7mOKSDA - type: f1 value: 0.9203938811554648 name: F1 Weighted verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDQ1MTYxMTFiNWYwZDExYThjMDVmODdiMmRjMmQzMzJmMWY5MWE0M2VhZmExZTEwMzFlMDQ2MWIyOTFjZDc4MyIsInZlcnNpb24iOjF9.T-HlP7Fl6NuPmqps7wHkTuGi_8wF6u6BuulCxX0sp8ocEP3j8GNH9goydsKTEHyLMmch9QuCrzqFmmGAW-wVAA - type: loss value: 0.23216550052165985 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzVhNzYwZWIyN2QzMjU2OThiZjRmMjFlYTQ2MDA3ZTVmNmFkYzE1NDA1OWQzOTM4ZmRiMmQ0OGE2MzY4ZTY1ZCIsInZlcnNpb24iOjF9.Zj38hE02ePkNK7m1dhPq_N25CC9p0ZekFyCSBAS534GfhFuNhtUFhcgr6DDjyPTbn906RJDmVNxu7g01eCarAw --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sagemaker-distilbert-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2322 - Accuracy: 0.921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9306 | 1.0 | 500 | 0.2322 | 0.921 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
Declan/FoxNews_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: test_model1.2_update results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_model1.2_update This model is a fine-tuned version of [Helsinki-NLP/opus-mt-mul-en](https://huggingface.co/Helsinki-NLP/opus-mt-mul-en) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.6296 - Bleu: 4.0505 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
Declan/FoxNews_model_v4
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPT2Neo1.3BPoints3") model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPT2Neo1.3BPoints3") ``` ``` - moviepass to return - this summer - swooped up by - original co-founder stacy spikes text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes. *** - middle schools do not have recess - should get back to doing it - amazing for communication - and getting kids to move around text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity. *** - ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. 
https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. 
*** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence.
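To actually run one of the prompt formats above, here is a minimal generation sketch building on the loading snippet at the top of this card (the decoding settings are illustrative assumptions, not values recommended by the model author):

```python
# assumes `tokenizer` and `model` were loaded as shown at the top of this card
prompt = (
    "informal english: space is huge and needs to be explored.\n"
    "Translated into the Style of Abraham Lincoln:"
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# sampling parameters below are assumptions; tune them to taste
output = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + 40,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```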
Declan/FoxNews_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: cc-by-4.0 --- # BART-base fine-tuned on NaturalQuestions for **Question Generation** [BART Model](https://arxiv.org/pdf/1910.13461.pdf) trained for Question Generation in an unsupervised manner using the [Self-Training](https://arxiv.org/pdf/2104.08801.pdf) algorithm (Kulshreshtha et al., EMNLP 2021). The dataset used consists of unaligned questions and passages from the [MLQuestions dataset](https://github.com/McGill-NLP/MLQuestions/tree/main/data). ## Details of Self-Training The Self-Training algorithm was presented as a baseline algorithm to compete with the proposed Back-Training in [Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval](https://arxiv.org/pdf/2104.08801.pdf) by *Devang Kulshreshtha, Robert Belfer, Iulian Vlad Serban, Siva Reddy*. Here is the abstract: In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA) from source to target domain. While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between the target domain and synthetic data distribution, and reduces model overfitting to the source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6% top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset, MLQuestions, containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs. ## Model training 🏋️ The training script can be found [here](https://github.com/McGill-NLP/MLQuestions/blob/main/UDA-SelfTraining.sh). ## Model in Action 🚀 ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM #Load the tokenizer tokenizer = AutoTokenizer.from_pretrained("geekydevu/bart-qg-mlquestions-selftraining") #Load the model model = AutoModelForSeq2SeqLM.from_pretrained("geekydevu/bart-qg-mlquestions-selftraining") ``` ## Citation If you want to cite this model you can use this: ```bibtex @inproceedings{kulshreshtha-etal-2021-back, title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval", author = "Kulshreshtha, Devang and Belfer, Robert and Serban, Iulian Vlad and Reddy, Siva", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.566", pages = "7064--7078", abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. 
We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.", } ``` > Created by [Devang Kulshreshtha](https://geekydevu.netlify.app/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
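As a follow-up to the loading snippet in *Model in Action* above, here is a minimal sketch of generating a question from a passage (the example passage and decoding settings are illustrative assumptions):

```python
# assumes `tokenizer` and `model` were loaded as in the Model in Action section
passage = "Gradient descent iteratively updates model parameters in the direction that reduces the loss."
inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)

# beam size and length limit are assumptions, not values from the paper
question_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(question_ids[0], skip_special_tokens=True))
```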
Declan/NPR_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - conversational --- # Philip DialoGPT Model
Declan/NewYorkTimes_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - text-classification - generic library_name: generic widget: - text: 'This video is sponsored by squarespace' example_title: Sponsor - text: 'Check out the merch at linustechtips.com' example_title: Unpaid/self promotion - text: "Don't forget to like, comment and subscribe" example_title: Interaction reminder - text: 'pqh4LfPeCYs,824.695,826.267,826.133,829.876,835.933,927.581' example_title: Extract text from video ---
Declan/Politico_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
These are files for the trained protein localization prediction model PB-Chlamy, created for the paper **"A Chloroplast Protein Atlas Reveals Novel Structures and Spatial Organization of Biosynthetic Pathways"** by Lianyong Wang, Weronika Patena, Kelly A. Van Baalen, Yihua Xie, Emily R. Singer, Sophia Gavrilenko, Michelle Warren-Williams, Linqu Han, Henry Harrigan, Vivian Chen, Vinh Ton, Saw Kyin, Henry H. Shwe, Matthew H. Cahn, Alexandra Wilson, Jianping Hu, Christoph Benning, Danny J. Schnell, Claire D. McWhite, Martin Jonikas (submitted for publication in May 2022).
DeepChem/ChemBERTa-77M-MTR
[ "pytorch", "roberta", "transformers" ]
null
{ "architectures": [ "RobertaForRegression" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7,169
null
--- license: apache-2.0 --- # DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings [![GitHub Stars](https://img.shields.io/github/stars/voidism/DiffCSE?style=social)](https://github.com/voidism/DiffCSE/) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb) arXiv link: https://arxiv.org/abs/2204.10298 To be published in [**NAACL 2022**](https://2022.naacl.org/) Authors: [Yung-Sung Chuang](https://people.csail.mit.edu/yungsung/), [Rumen Dangovski](http://super-ms.mit.edu/rumen.html), [Hongyin Luo](http://people.csail.mit.edu/hyluo/), [Yang Zhang](https://mitibmwatsonailab.mit.edu/people/yang-zhang/), [Shiyu Chang](https://code-terminator.github.io/), [Marin Soljačić](http://www.mit.edu/~soljacic/marin.html), [Shang-Wen Li](https://swdanielli.github.io/), [Scott Wen-tau Yih](https://scottyih.org/), [Yoon Kim](https://people.csail.mit.edu/yoonkim/), [James Glass](http://groups.csail.mit.edu/sls/people/glass.shtml) Our code is mainly based on the code of [SimCSE](https://arxiv.org/abs/2104.08821). Please refer to their [repository](https://github.com/princeton-nlp/SimCSE) for more detailed information. ## Overview ![DiffCSE](https://github.com/voidism/DiffCSE/raw/master/diffcse.png) We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffSCE is an instance of equivariant contrastive learning [(Dangovski et al., 2021)](https://arxiv.org/abs/2111.00899), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks. ## Setups [![Python](https://img.shields.io/badge/python-3.9.5-blue?logo=python&logoColor=FED643)](https://www.python.org/downloads/release/python-395/) ### Requirements * Python 3.9.5 ### Install our customized Transformers package ``` cd transformers-4.2.1 pip install . ``` > If you have already installed `transformers==4.2.1` through pip, you need to put `modeling_bert.py` into `<your_python_env>/site-packages/transformers/models/bert/modeling_bert.py` and `modeling_roberta.py` into `<your_python_env>/site-packages/transformers/models/bert/modeling_roberta.py`. > We modify these two files in the package so that we can perform _conditional_ pretraining tasks using BERT/RoBERTa. If possible, please directly pip install our customized Transformers package. ### Install other packages ``` pip install -r requirements.txt ``` ### Download the pretraining dataset ``` cd data bash download_wiki.sh ``` ### Download the downstream dataset ``` cd SentEval/data/downstream/ bash download_dataset.sh ``` ## Training (The same as `run_diffcse.sh`.) 
```bash python train.py \ --model_name_or_path bert-base-uncased \ --generator_name distilbert-base-uncased \ --train_file data/wiki1m_for_simcse.txt \ --output_dir <your_output_model_dir> \ --num_train_epochs 2 \ --per_device_train_batch_size 64 \ --learning_rate 7e-6 \ --max_seq_length 32 \ --evaluation_strategy steps \ --metric_for_best_model stsb_spearman \ --load_best_model_at_end \ --eval_steps 125 \ --pooler_type cls \ --mlp_only_train \ --overwrite_output_dir \ --logging_first_step \ --logging_dir <your_logging_dir> \ --temp 0.05 \ --do_train \ --do_eval \ --batchnorm \ --lambda_weight 0.005 \ --fp16 --masking_ratio 0.30 ``` Our new arguments: * `--lambda_weight`: the lambda coefficient mentioned in Section 3 of our paper. * `--masking_ratio`: the masking ratio for MLM generator to randomly replace tokens. * `--generator_name`: the model name of generator. For `bert-base-uncased`, we use `distilbert-base-uncased`. For `roberta-base`, we use `distilroberta-base`. Arguments from [SimCSE](https://github.com/princeton-nlp/SimCSE): * `--train_file`: Training file path (`data/wiki1m_for_simcse.txt`). * `--model_name_or_path`: Pre-trained checkpoints to start with such as BERT-based models (`bert-base-uncased`, `bert-large-uncased`, etc.) and RoBERTa-based models (`RoBERTa-base`, `RoBERTa-large`). * `--temp`: Temperature for the contrastive loss. We always use `0.05`. * `--pooler_type`: Pooling method. * `--mlp_only_train`: For unsupervised SimCSE or DiffCSE, it works better to train the model with MLP layer but test the model without it. You should use this argument when training unsupervised SimCSE/DiffCSE models. For the results in our paper, we use a NVidia 2080Ti GPU with CUDA 11.2. Using different types of devices or different versions of CUDA/Python/PyTorch may lead to slightly different performance. ## Evaluation [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb) We provide a simple colab notebook to reproduce our results easily. We can also run the commands below for evaluation: ```bash python evaluation.py \ --model_name_or_path <your_output_model_dir> \ --pooler cls_before_pooler \ --task_set <sts|transfer|full> \ --mode test ``` To evaluate our pretrained DiffCSE checkpoints, we can use the following scripts: ### BERT #### STS ```bash python evaluation.py \ --model_name_or_path voidism/diffcse-bert-base-uncased-sts \ --pooler cls_before_pooler \ --task_set sts \ --mode test ``` #### Transfer Tasks ```bash python evaluation.py \ --model_name_or_path voidism/diffcse-bert-base-uncased-trans \ --pooler cls_before_pooler \ --task_set transfer \ --mode test ``` ### RoBERTa #### STS ```bash python evaluation.py \ --model_name_or_path voidism/diffcse-roberta-base-sts \ --pooler cls_before_pooler \ --task_set sts \ --mode test ``` #### Transfer Tasks ```bash python evaluation.py \ --model_name_or_path voidism/diffcse-roberta-base-trans \ --pooler cls_before_pooler \ --task_set transfer \ --mode test ``` For more detailed information, please check [SimCSE's GitHub repo](https://github.com/princeton-nlp/SimCSE). 
## Pretrained models [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97-Models-yellow)](https://huggingface.co/voidism) * DiffCSE-BERT-base (STS): https://huggingface.co/voidism/diffcse-bert-base-uncased-sts * DiffCSE-BERT-base (transfer tasks): https://huggingface.co/voidism/diffcse-bert-base-uncased-trans * DiffCSE-RoBERTa-base (STS): https://huggingface.co/voidism/diffcse-roberta-base-sts * DiffCSE-RoBERTa-base (transfer tasks): https://huggingface.co/voidism/diffcse-roberta-base-trans We can load the models using the API provided by [SimCSE](https://github.com/princeton-nlp/SimCSE). See [Getting Started](https://github.com/princeton-nlp/SimCSE#getting-started) for more information. ```python from diffcse import DiffCSE model_bert_sts = DiffCSE("voidism/diffcse-bert-base-uncased-sts") model_bert_trans = DiffCSE("voidism/diffcse-bert-base-uncased-trans") model_roberta_sts = DiffCSE("voidism/diffcse-roberta-base-sts") model_roberta_trans = DiffCSE("voidism/diffcse-roberta-base-trans") ``` ## Citations [![DOI](https://img.shields.io/badge/DOI-10.48550/arXiv.2204.10298-green?color=FF8000?color=009922)](https://doi.org/10.48550/arXiv.2204.10298) Please cite our paper and the SimCSE paper if they are helpful to your work! ```bibtex @inproceedings{chuang2022diffcse, title={{DiffCSE}: Difference-based Contrastive Learning for Sentence Embeddings}, author={Chuang, Yung-Sung and Dangovski, Rumen and Luo, Hongyin and Zhang, Yang and Chang, Shiyu and Soljacic, Marin and Li, Shang-Wen and Yih, Wen-tau and Kim, Yoon and Glass, James}, booktitle={Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)}, year={2022} } @inproceedings{gao2021simcse, title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings}, author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi}, booktitle={Empirical Methods in Natural Language Processing (EMNLP)}, year={2021} } ```
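Following the loading example above, a short usage sketch — this assumes the `DiffCSE` wrapper exposes the same `encode` and `similarity` helpers as SimCSE's tool API, which the package is based on:

```python
from diffcse import DiffCSE

model = DiffCSE("voidism/diffcse-bert-base-uncased-sts")

# encode sentences into embeddings (method names assumed to mirror SimCSE)
embeddings = model.encode(["A man is playing a guitar.", "A woman is cooking dinner."])

# cosine similarity between two sentences
score = model.similarity("A man is playing a guitar.", "A man is playing an instrument.")
print(embeddings.shape, score)
```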
DeepESP/gpt2-spanish
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "es", "dataset:ebooks", "transformers", "GPT-2", "Spanish", "ebooks", "nlg", "license:mit", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,463
null
--- license: bsd-3-clause --- # CodeGen (CodeGen-NL 6B) ## Model description CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`). The checkpoint included in this repository is denoted as **CodeGen-NL 6B** in the paper, where "NL" means it is pre-trained on the Pile and "6B" refers to the number of trainable parameters. ## Training data This checkpoint (CodeGen-NL 6B) was pre-trained on [the Pile](https://github.com/EleutherAI/the-pile), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai/). Parts of the dataset include code data. ## Training procedure CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs. The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism. See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Evaluation results We evaluate our models on two code generation benchmark: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Intended Use and Limitations As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them. However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well. ## How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-nl") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-nl") text = "def hello_world():" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` ## BibTeX entry and citation info ```bibtex @article{Nijkamp2022ACP, title={A Conversational Paradigm for Program Synthesis}, author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming}, journal={arXiv preprint}, year={2022} } ```
DeepPavlov/bert-base-cased-conversational
[ "pytorch", "jax", "bert", "feature-extraction", "en", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3,009
null
--- license: bsd-3-clause --- # CodeGen (CodeGen-Multi 16B) ## Model description CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`). The checkpoint included in this repository is denoted as **CodeGen-Multi 16B** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 16B* and further pre-trained on a dataset of multiple programming languages, and "16B" refers to the number of trainable parameters. ## Training data This checkpoint (CodeGen-Multi 16B) was firstly initialized with *CodeGen-NL 16B*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python. ## Training procedure CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs. The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism. See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Evaluation results We evaluate our models on two code generation benchmark: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Intended Use and Limitations As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them. However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well. ## How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-16B-multi") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-16B-multi") text = "def hello_world():" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` ## BibTeX entry and citation info ```bibtex @article{Nijkamp2022ACP, title={A Conversational Paradigm for Program Synthesis}, author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming}, journal={arXiv preprint}, year={2022} } ```
DeepPavlov/bert-base-multilingual-cased-sentence
[ "pytorch", "jax", "bert", "feature-extraction", "multilingual", "arxiv:1704.05426", "arxiv:1809.05053", "arxiv:1908.10084", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
140
null
--- license: bsd-3-clause --- # CodeGen (CodeGen-Mono 16B) ## Model description CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`). The checkpoint included in this repository is denoted as **CodeGen-Mono 16B** in the paper, where "Mono" means the model is initialized with *CodeGen-Multi 16B* and further pre-trained on a Python programming language dataset, and "16B" refers to the number of trainable parameters. ## Training data This checkpoint (CodeGen-Mono 16B) was firstly initialized with *CodeGen-Multi 16B*, and then pre-trained on BigPython dataset. The data consists of 71.7B tokens of Python programming language. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Training procedure CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs. The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism. See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Evaluation results We evaluate our models on two code generation benchmark: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Intended Use and Limitations As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them. However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well. ## How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-16B-mono") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-16B-mono") text = "def hello_world():" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` ## BibTeX entry and citation info ```bibtex @article{Nijkamp2022ACP, title={A Conversational Paradigm for Program Synthesis}, author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming}, journal={arXiv preprint}, year={2022} } ```
DeskDown/MarianMixFT_en-ja
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: en thumbnail: http://www.huggingtweets.com/radfemman/1649830938917/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1428572680882688005/rqGxWIRJ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Radfem Ally 🇺🇸</div> <div style="text-align: center; font-size: 14px;">@radfemman</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Radfem Ally 🇺🇸. | Data | Radfem Ally 🇺🇸 | | --- | --- | | Tweets downloaded | 227 | | Retweets | 33 | | Short tweets | 14 | | Tweets kept | 180 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29ku9tl5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @radfemman's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/33qza7xp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/33qza7xp/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/radfemman') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Dimedrolza/DialoGPT-small-cyberpunk
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 --- # **PPO** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **PPO** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Evaluation Results mean_reward=903.00 +/- 327.58357712193083 ## Usage (with Stable-baselines3) TODO: Add your code
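Until the official snippet is filled in, here is a minimal loading-and-evaluation sketch. The repo id and checkpoint filename are placeholders, and the code assumes `stable-baselines3` and `huggingface_sb3` are installed:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import VecFrameStack

# placeholder repo id / filename: replace with this model's actual values
checkpoint = load_from_hub(
    repo_id="<user>/ppo-SpaceInvadersNoFrameskip-v4",
    filename="ppo-SpaceInvadersNoFrameskip-v4.zip",
)
model = PPO.load(checkpoint)

# standard Atari preprocessing with 4 stacked frames (assumed to match training)
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```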
DingleyMaillotUrgell/homer-bot
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 language: en --- **NOTE: This is the FP32 version of [Facebook's official bart-large](https://huggingface.co/facebook/bart-large/edit/main/README.md).** # BART (large-sized model) BART model pre-trained on the English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart). Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). ## Intended uses & limitations You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model in PyTorch: ```python from transformers import BartTokenizer, BartModel tokenizer = BartTokenizer.from_pretrained('facebook/bart-large') model = BartModel.from_pretrained('facebook/bart-large') inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1910-13461, author = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer}, title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension}, journal = {CoRR}, volume = {abs/1910.13461}, year = {2019}, url = {http://arxiv.org/abs/1910.13461}, eprinttype = {arXiv}, eprint = {1910.13461}, timestamp = {Thu, 31 Oct 2019 14:02:26 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
DivyanshuSheth/T5-Seq2Seq-Final
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - huggan - gan datasets: - huggan/maps # See a list of available tags here: # https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12 # task: unconditional-image-generation or conditional-image-generation or image-to-image license: mit --- # Pix2Pix trained on the maps dataset ## Model description This model is a [Pix2Pix](https://arxiv.org/abs/1611.07004) model trained on the [huggan/maps](https://huggingface.co/datasets/huggan/maps) dataset. The goal for the model is to turn a satellite map into a geographic map à la Google Maps, and the other way around. The model was trained using the [example script](https://github.com/huggingface/community-events/tree/main/huggan/pytorch/pix2pix) provided by HuggingFace as part of the [HugGAN sprint](https://github.com/huggingface/community-events/tree/main/huggan). ## Intended uses & limitations #### How to use ```python from huggan.pytorch.pix2pix.modeling_pix2pix import GeneratorUNet from PIL import Image from torchvision.utils import save_image image = Image.open("...") generator = GeneratorUNet.from_pretrained("huggan/pix2pix-maps") pixel_values = transform(image).unsqueeze(0) output = generator(pixel_values) save_image(output, 'output.png', normalize=True) ``` #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data The data used was huggan/maps. ## Training procedure The following command was used: ```bash accelerate launch train.py --dataset huggan/maps --push_to_hub --model_name pix2pix-maps --checkpoint_interval 1 ``` ## Eval results ## Generated Images You can embed local or remote images using `![](...)` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/IsolaZZE16, author = {Phillip Isola and Jun{-}Yan Zhu and Tinghui Zhou and Alexei A. Efros}, title = {Image-to-Image Translation with Conditional Adversarial Networks}, journal = {CoRR}, volume = {abs/1611.07004}, year = {2016}, url = {http://arxiv.org/abs/1611.07004}, eprinttype = {arXiv}, eprint = {1611.07004}, timestamp = {Mon, 13 Aug 2018 16:49:05 +0200}, biburl = {https://dblp.org/rec/journals/corr/IsolaZZE16.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
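One note on the usage snippet above: `transform` is referenced but never defined. A minimal preprocessing pipeline that makes the example runnable is sketched below — the 256×256 size and [-1, 1] normalization are assumptions about the training setup, not confirmed values:

```python
from torchvision import transforms

# assumed preprocessing; adjust to whatever was actually used during training
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),                                   # scales to [0, 1]
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # shifts to [-1, 1]
])
```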
Dizoid/Lll
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-04-13T08:22:24Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT-10_epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT-10_epochs This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.6933 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 91 | 9.1280 | | No log | 2.0 | 182 | 7.7624 | | No log | 3.0 | 273 | 6.8875 | | No log | 4.0 | 364 | 6.2064 | | No log | 5.0 | 455 | 5.6836 | | 7.584 | 6.0 | 546 | 5.2978 | | 7.584 | 7.0 | 637 | 5.0191 | | 7.584 | 8.0 | 728 | 4.8337 | | 7.584 | 9.0 | 819 | 4.7284 | | 7.584 | 10.0 | 910 | 4.6933 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cpu - Datasets 2.0.0 - Tokenizers 0.11.6
Doogie/Waynehills-KE-T5-doogie
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-04-13T09:47:10Z
--- tags: autotrain language: ja widget: - text: "脅威を感じた蛇は再び襲いかかります。したがって、噛まれた際は速やかに蛇の攻撃範囲から離れましょう。 少なくとも6mは間合いを取りましょう。できる限り速やかに医療処置を求めることが大切です。ほとんどの病院は、毒蛇用の抗毒素(血清)を用意しています。病院に到着する前の応急手当だけでは、あまり症状の改善にはつながりません。被害現場からすぐさま救急サービスに通報できれば不幸中の幸いです。救急車を呼べない場合は、何としても助けを求め、みなさんまたは被害者を最寄りの病院へ搬送しなければなりません。みなさんに噛みついた蛇がガラガラヘビかどうかが分からない場合でも、すぐに病院へ直行しましょう。実際に毒が体に回り、症状が出始めたとしても、病院にいれば安心できるでしょう。噛まれた箇所を心臓よりも上に置くと、毒を含んだ血液が猛スピードで心臓に流れ込みます。救助が来るまでの間、できれば被害者の体を静止させましょう。体を動かすと血流が増大し、あっという間に毒が回ります。したがって、毒蛇に噛まれた際は体の動きを最小限に抑えて安静にすることが大切です。もちろん、みなさんの周りに誰もいなければ、じっとしている場合ではありません。すぐに助けを求めましょう。" - text: "噛み傷の周囲は大きく腫れ上がります。傷口の周りの衣類はすべて取り除きましょう。また、患部に付けているアクセサリー類も取り外しましょう。アクセサリーを付けたままにすると、患部が腫れた際に血管を締め付けてしまうため、場合によっては大切なアクセサリーを切断する羽目になります。30秒ほど噛み傷からそのまま出血させましょう。出血とともに、ある程度は毒が傷口から排出されるでしょう。毒を少しでも吸引できればそれに越したことはありません。ただし、必ず毒蛇用の吸引器を使いましょう。吸引ポンプには詳しい取扱説明書が付属しているはずですが、基本的にポンプを直接噛み傷に当てて毒を吸い出します。傷口を水で洗ってはいけません。皮膚を洗い流してしまうと、後で毒の種類を特定しにくくなります。医療者は皮膚に残った毒からガラガラヘビの種類を特定し、みなさんの症状に最適な治療法を決定します。添え木または三角巾で固定すれば、患部の血流を抑えることができます。できるだけ血流を抑えて毒の回りを遅らせましょう。腕を吊るす場合は、衣類を三角形に折り畳むか、または三角形に切りましょう。肘を中心にして腕を三角巾で包みます。みなさんまたは被害者の肘を布の形に合わせて曲げましょう。三角巾の端同士を肩口で結び合わせます。肘から先を三角形の底辺で固定したら、手を外に出します。 板切れや丸めた新聞紙を添え木にして腕や脚を支えましょう。それらが手元になければ、衣類を丸めて使いましょう。添え木を腕や脚の側面に当て、傷口の上下の関節を伸ばして固定します。ベルト、テープ、包帯といった手元にあるものを使って添え木をしっかりと縛り付けましょう。ただし、直接傷口の上から縛ってはいけません。傷口の上下いずれかの側で縛りましょう。患部の腫れが酷い場合は、添え木の圧迫を緩める必要があります。" - text: "被害者に話しかけましょう。次々に質問を投げかけ、できる限り被害者の意識を噛み傷から逸らしましょう。 不安やパニックは心拍を上昇させ、毒の回りを速めます。みなさん自身が噛まれた場合は、とにかく落ち着きましょう。何度かゆっくりと深く息をして緊張をほぐしましょう。アメリカに滞在中のみなさんは、できれば救急車を待つ間、アメリカ中毒情報センター(1-800-222-1222)に連絡を取りましょう。噛まれた箇所が腫れ上がれば、一目でそれは毒蛇によるものと判断できます。それと同時に皮膚の変色が起こるでしょう。また、一連の細かい刺傷が残る無毒蛇の噛み傷とは違い、毒蛇の噛み傷は1カ所ないしは2カ所に目立った刺傷が残るのが特徴です。これは毒牙の発達に伴い、歯が退化しているためです。さらに、めまい、患部の激しい痛み、視覚障害、他の箇所のチクチクした痛み、そして極度の発汗といった症状も、毒蛇による噛み傷のサインです。皮膚が青ざめるのはショック症状の典型です。 他のサインとして、心拍の上昇、過呼吸、吐き気、めまいなどがあります。また、被害者の瞳孔が拡大していないかをチェックしましょう。被害者がショック症状を起こしつつある場合は、仰向けに寝かせて足を30cmほど浮かせましょう。さらに、毛布や上着などで体を包んで温めましょう。呼吸、咳、または体の動きといった生体反応が見られない場合は、直ちに心肺蘇生が必要です。これらの物質は毒の回りを加速させます。したがって、ガラガラヘビに噛まれた直後にアルコールまたはカフェイン飲料で水分補給をするのは禁物です。" datasets: - vabadeh213/autotrain-data-wikihow co2_eq_emissions: 361.800665798794 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 737822494 - CO2 Emissions (in grams): 361.800665798794 ## Validation Metrics - Loss: 2.326287031173706 - Rouge1: 5.2053 - Rouge2: 1.8535 - RougeL: 5.2419 - RougeLsum: 5.228 - Gen Len: 18.3677 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/vabadeh213/autotrain-wikihow-737822494 ```
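Besides the cURL call above, the checkpoint can presumably also be loaded with the 🤗 Transformers summarization pipeline (untested sketch; the generation settings are assumptions):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="vabadeh213/autotrain-wikihow-737822494")

text = "..."  # a Japanese wikiHow-style passage, e.g. one of the widget examples above
print(summarizer(text, max_length=48, min_length=8))
```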
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-8
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
2022-04-13T10:18:06Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
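For reference, a checkpoint fine-tuned on SQuAD like this one is typically used through the question-answering pipeline. The repo id below is a placeholder for wherever this checkpoint is published:

```python
from transformers import pipeline

# placeholder repo id; point this at the actual location of the fine-tuned checkpoint
qa = pipeline("question-answering", model="<user>/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```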
DoyyingFace/bert-asian-hate-tweets-asian-unclean-with-clean-valid
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- tags: - generated_from_trainer model-index: - name: focus_sum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # focus_sum This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0575 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9644 | 3.75 | 500 | 0.6880 | | 0.4682 | 7.52 | 1000 | 0.4350 | | 0.4672 | 11.28 | 1500 | 0.2599 | | 0.3439 | 15.04 | 2000 | 0.1568 | | 0.2753 | 18.79 | 2500 | 0.1064 | | 0.1885 | 22.55 | 3000 | 0.0737 | | 0.2185 | 26.31 | 3500 | 0.0575 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.1
albert-base-v2
[ "pytorch", "tf", "jax", "rust", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4,785,283
2022-04-13T10:48:26Z
--- language: - multilingual - ar - bn - de - el - en - es - fi - fr - hi - id - it - ja - ko - nl - pl - pt - ru - sv - sw - te - th - tr - vi - zh thumbnail: https://github.com/studio-ousia/luke/raw/master/resources/luke_logo.png tags: - luke - named entity recognition - relation classification - question answering license: apache-2.0 --- ## mLUKE **mLUKE** (multilingual LUKE) is a multilingual extension of LUKE. Please check the [official repository](https://github.com/studio-ousia/luke) for more details and updates. This is the mLUKE base model with 12 hidden layers, 768 hidden size. The total number of parameters in this model is 561M. The model was initialized with the weights of XLM-RoBERTa(large) and trained using December 2020 version of Wikipedia in 24 languages. This model is a lite-weight version of [studio-ousia/mluke-large](https://huggingface.co/studio-ousia/mluke-large), without Wikipedia entity embeddings but only with special entities such as `[MASK]`. ### Citation If you find mLUKE useful for your work, please cite the following paper: ```latex @inproceedings{ri-etal-2022-mluke, title = "m{LUKE}: {T}he Power of Entity Representations in Multilingual Pretrained Language Models", author = "Ri, Ryokan and Yamada, Ikuya and Tsuruoka, Yoshimasa", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", year = "2022", url = "https://aclanthology.org/2022.acl-long.505", ```
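### Usage

A brief loading sketch — the repo id is an assumption (the lite base checkpoint is presumably published as `studio-ousia/mluke-base-lite`), and the entity-span handling follows the standard LUKE interface in 🤗 Transformers:

```python
from transformers import MLukeTokenizer, LukeModel

# repo id assumed; replace with the actual checkpoint name if it differs
tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base-lite")
model = LukeModel.from_pretrained("studio-ousia/mluke-base-lite")

text = "Tokyo is the capital of Japan."
entity_spans = [(0, 5)]  # character span of "Tokyo"

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```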
openai-gpt
[ "pytorch", "tf", "rust", "safetensors", "openai-gpt", "text-generation", "en", "arxiv:1705.11168", "arxiv:1803.02324", "arxiv:1910.09700", "transformers", "license:mit", "has_space" ]
text-generation
{ "architectures": [ "OpenAIGPTLMHeadModel" ], "model_type": "openai-gpt", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
65,432
2022-04-13T11:49:54Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: javilonso/classificationEsp1_Augmented_Polarity results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # javilonso/classificationEsp1_Augmented_Polarity This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1633 - Validation Loss: 0.6795 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11565, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.6887 | 0.6011 | 0 | | 0.4452 | 0.5385 | 1 | | 0.1633 | 0.6795 | 2 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.6.0 - Datasets 2.0.0 - Tokenizers 0.11.6
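The classificationEsp1_Augmented_Polarity card reports only training losses. A TensorFlow inference sketch for this kind of polarity classifier is given below; the label mapping is not documented in the card, so `id2label` from the saved config is printed as-is.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Repo id taken from the card title.
model_id = "javilonso/classificationEsp1_Augmented_Polarity"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("El hotel era estupendo y el personal muy amable.", return_tensors="tf")
logits = model(**inputs).logits
probs = tf.nn.softmax(logits, axis=-1).numpy()[0]

# The card does not document the class names; they come straight from the config.
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```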
0307061430/xuangou
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-04-13T13:54:18Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # ABrinkmann/sbert_xtremedistil-l6-h256-uncased-mean-cosine-h32 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 32 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ABrinkmann/sbert_xtremedistil-l6-h256-uncased-mean-cosine-h32') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ABrinkmann/sbert_xtremedistil-l6-h256-uncased-mean-cosine-h32) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 251 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 26, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 16, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 256, 'out_features': 32, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
0x7194633/keyt5-base
[ "pytorch", "t5", "text2text-generation", "ru", "transformers", "license:mit", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- language: - en - et tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-en-et results: - task: name: Translation eng-est type: translation args: eng-est dataset: name: flores101-devtest type: flores_101 args: eng est devtest metrics: - name: BLEU type: bleu value: 28.3 - task: name: Translation eng-est type: translation args: eng-est dataset: name: newsdev2018 type: newsdev2018 args: eng-est metrics: - name: BLEU type: bleu value: 25.2 - task: name: Translation eng-est type: translation args: eng-est dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-est metrics: - name: BLEU type: bleu value: 53.4 - task: name: Translation eng-est type: translation args: eng-est dataset: name: newstest2018 type: wmt-2018-news args: eng-est metrics: - name: BLEU type: bleu value: 26.7 --- # opus-mt-tc-big-en-et Neural machine translation model for translating from English (en) to Estonian (et). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-13 * source language(s): eng * target language(s): est * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-est/opusTCv20210807+bt_transformer-big_2022-03-13.zip) * more information released models: [OPUS-MT eng-est README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-est/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>est<< A cab is waiting.", ">>vro<< Where do you live?" 
] model_name = "pytorch-models/opus-mt-tc-big-en-et" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Takso ootab. # Kus sa elad? ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-et") print(pipe(">>est<< A cab is waiting.")) # expected output: Takso ootab. ``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-est/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-est/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | eng-est | tatoeba-test-v2021-08-07 | 0.71255 | 53.4 | 1359 | 7992 | | eng-est | flores101-devtest | 0.61306 | 28.3 | 1012 | 19788 | | eng-est | newsdev2018 | 0.57225 | 25.2 | 2000 | 34492 | | eng-est | newstest2018 | 0.58540 | 26.7 | 2000 | 36269 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 17:00:19 EEST 2022 * port machine: LM0-400-22516.local
AJ/rick-sanchez-bot
[ "conversational", "funny" ]
conversational
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-04-13T17:16:57Z
--- tags: - conversational --- # Harry Potter DialoGPT Model
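This card is only a title, with no usage instructions. Below is the standard DialoGPT-style chat loop from the transformers documentation, adapted as a sketch; the repo id is a placeholder because the card does not state where the checkpoint is hosted.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder repo id; replace with the actual hub path of this DialoGPT fine-tune.
model_id = "your-username/harry-potter-DialoGPT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat_history_ids = None
for step in range(3):
    new_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```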
AKulk/wav2vec2-base-timit-epochs20
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-04-13T17:46:47Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: javilonso/Mex_Rbta_TitleWithOpinion_Attraction results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # javilonso/Mex_Rbta_TitleWithOpinion_Attraction This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0064 - Validation Loss: 0.0515 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 8979, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.0780 | 0.0650 | 0 | | 0.0204 | 0.0464 | 1 | | 0.0064 | 0.0515 | 2 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.6.0 - Datasets 2.0.0 - Tokenizers 0.11.6
ATGdev/ai_ironman
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-04-13T21:50:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9161290322580645 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7796 - Accuracy: 0.9161 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2938 | 1.0 | 318 | 3.2905 | 0.7410 | | 2.6346 | 2.0 | 636 | 1.8833 | 0.8326 | | 1.5554 | 3.0 | 954 | 1.1650 | 0.8926 | | 1.0189 | 4.0 | 1272 | 0.8636 | 0.9110 | | 0.8028 | 5.0 | 1590 | 0.7796 | 0.9161 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.0.0 - Tokenizers 0.12.1
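The clinc_oos card lists metrics but no inference snippet. A one-call sketch with the text-classification pipeline follows; the repo id is assumed from the model name in the card and may differ from the actual hub path.

```python
from transformers import pipeline

# Assumed repo id; the card only gives the model name, not the full hub path.
classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-clinc")

# Returns the predicted clinc_oos intent label and its score.
print(classifier("How do I transfer money to my savings account?"))
```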
AaravMonkey/modelRepo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-22T15:53:17Z
--- tags: - huggingnft - nft - huggan - gan - image - images - unconditional-image-generation datasets: - huggingnft/cryptopunks license: mit --- # Hugging NFT: cryptopunks ## Disclaimer All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright holder. ## Model description LightWeight GAN model for unconditional generation. NFT collection available [here](https://opensea.io/collection/cryptopunks). Dataset is available [here](https://huggingface.co/datasets/huggingnft/cryptopunks). Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft). Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft). [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingnft?style=social)](https://github.com/AlekseyKorshuk/huggingnft) ## Intended uses & limitations #### How to use Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft). #### Limitations and bias Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft). ## Training data Dataset is available [here](https://huggingface.co/datasets/huggingnft/cryptopunks). ## Training procedure Training script is available [here](https://github.com/AlekseyKorshuk/huggingnft). ## Generated Images Check results with Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft). ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingnft?style=social)](https://github.com/AlekseyKorshuk/huggingnft) ### BibTeX entry and citation info ```bibtex @InProceedings{huggingnft, author={Aleksey Korshuk} year=2022 } ```
AdapterHub/bert-base-uncased-pf-conll2000
[ "bert", "en", "dataset:conll2000", "arxiv:2104.08247", "adapter-transformers", "token-classification", "adapterhub:chunk/conll2000" ]
token-classification
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-04-14T08:18:44Z
--- language: en --- <p align="center"> <img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%"> </p> **Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch** ## Task: classification https://github.com/mindee/doctr ### Example usage: ```python >>> from doctr.io import DocumentFile >>> from doctr.models import ocr_predictor, from_hub >>> img = DocumentFile.from_images(['<image_path>']) >>> # Load your model from the hub >>> model = from_hub('mindee/my-model') >>> # Pass it to the predictor >>> # If your model is a recognition model: >>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large', >>> reco_arch=model, >>> pretrained=True) >>> # If your model is a detection model: >>> predictor = ocr_predictor(det_arch=model, >>> reco_arch='crnn_mobilenet_v3_small', >>> pretrained=True) >>> # Get your predictions >>> res = predictor(img) ```
AdapterHub/bert-base-uncased-pf-drop
[ "bert", "en", "dataset:drop", "arxiv:2104.08247", "adapter-transformers", "question-answering" ]
question-answering
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-04-14T08:54:15Z
--- language: en --- <p align="center"> <img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%"> </p> **Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch** ## Task: detection https://github.com/mindee/doctr ### Example usage: ```python >>> from doctr.io import DocumentFile >>> from doctr.models import ocr_predictor, from_hub >>> img = DocumentFile.from_images(['<image_path>']) >>> # Load your model from the hub >>> model = from_hub('mindee/my-model') >>> # Pass it to the predictor >>> # If your model is a recognition model: >>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large', >>> reco_arch=model, >>> pretrained=True) >>> # If your model is a detection model: >>> predictor = ocr_predictor(det_arch=model, >>> reco_arch='crnn_mobilenet_v3_small', >>> pretrained=True) >>> # Get your predictions >>> res = predictor(img) ```
AdapterHub/roberta-base-pf-social_i_qa
[ "roberta", "en", "dataset:social_i_qa", "arxiv:2104.08247", "adapter-transformers" ]
null
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: - mt datasets: - MLRS/korpus_malti model-index: - name: mBERTu results: - task: type: dependency-parsing name: Dependency Parsing dataset: type: universal_dependencies args: mt_mudt name: Maltese Universal Dependencies Treebank (MUDT) metrics: - type: uas value: 92.10 name: Unlabelled Attachment Score - type: las value: 87.87 name: Labelled Attachment Score - task: type: part-of-speech-tagging name: Part-of-Speech Tagging dataset: type: mlrs_pos name: MLRS POS dataset metrics: - type: accuracy value: 98.66 name: UPOS Accuracy args: upos - type: accuracy value: 98.58 name: XPOS Accuracy args: xpos - task: type: named-entity-recognition name: Named Entity Recognition dataset: type: wikiann name: WikiAnn (Maltese) args: mt metrics: - type: f1 args: span value: 86.60 name: Span-based F1 - task: type: sentiment-analysis name: Sentiment Analysis dataset: type: mt-sentiment-analysis name: Maltese Sentiment Analysis Dataset metrics: - type: f1 args: macro value: 76.79 name: Macro-averaged F1 license: cc-by-nc-sa-4.0 widget: - text: "Malta huwa pajjiż fl-[MASK]." --- # mBERTu A Maltese multilingual model pre-trained on the Korpus Malti v4.0 using multilingual BERT as the initial checkpoint. ## License This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. Permissions beyond the scope of this license may be available at [https://mlrs.research.um.edu.mt/](https://mlrs.research.um.edu.mt/). [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png ## Citation This work was first presented in [Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese](https://aclanthology.org/2022.deeplo-1.10/). Cite it as follows: ```bibtex @inproceedings{BERTu, title = "Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and {BERT} Models for {M}altese", author = "Micallef, Kurt and Gatt, Albert and Tanti, Marc and van der Plas, Lonneke and Borg, Claudia", booktitle = "Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing", month = jul, year = "2022", address = "Hybrid", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.deeplo-1.10", doi = "10.18653/v1/2022.deeplo-1.10", pages = "90--101", } ```
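The mBERTu card ships a fill-mask widget example but no code. A matching pipeline call is sketched below; the repo id `MLRS/mBERTu` is assumed from the MLRS links in the card and should be verified.

```python
from transformers import pipeline

# Assumed repo id; adjust if the checkpoint is published under a different path.
fill_mask = pipeline("fill-mask", model="MLRS/mBERTu")

# Same prompt as the card's widget example.
for prediction in fill_mask("Malta huwa pajjiż fl-[MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```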
AdapterHub/roberta-base-pf-squad
[ "roberta", "en", "dataset:squad", "arxiv:2104.08247", "adapter-transformers", "question-answering", "adapterhub:qa/squad1" ]
question-answering
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - conversational --- # My Awesome Model that talks like Rick but thinks that your name is Morty
Adielcane/Adiel
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - autotrain - tabular - classification - structured-data-classification datasets: - huggingface/autotrain-data-spaceship-titanic co2_eq_emissions: 0.04601722024291126 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 742422653 - CO2 Emissions (in grams): 0.04601722024291126 ## Validation Metrics - Loss: 0.4551127191306156 - Accuracy: 0.7760968229954615 - Precision: 0.751358695652174 - Recall: 0.8303303303303303 - AUC: 0.8649197978466272 - F1: 0.7888730385164051 ## Usage ```python import json import joblib import pandas as pd model = joblib.load('model.joblib') config = json.load(open('config.json')) features = config['features'] data = pd.read_csv("data.csv") data = data[features] predictions = model.predict(data) # or model.predict_proba(data) ```
Adielcane/Adielcane
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - autotrain - tabular - classification - structured-data-classification datasets: - huggingface/autotrain-data-spaceship-titanic co2_eq_emissions: 0.21868125228022106 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 742422655 - CO2 Emissions (in grams): 0.21868125228022106 ## Validation Metrics - Loss: 0.4565194132591523 - Accuracy: 0.7776096822995462 - Precision: 0.7743362831858407 - Recall: 0.7882882882882883 - AUC: 0.8650445414927123 - F1: 0.78125 ## Usage ```python import json import joblib import pandas as pd model = joblib.load('model.joblib') config = json.load(open('config.json')) features = config['features'] data = pd.read_csv("data.csv") data = data[features] predictions = model.predict(data) # or model.predict_proba(data) ```
AnonymousSub/SR_rule_based_roberta_bert_triplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- language: es license: mit widget: - text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022) We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime:A Study of Racism Detection in Spanish" (NEATClasS 2022) We applied 6 different methods ground-truth estimations, and for each one we performed 4 epochs of fine-tuning. The result is made of 24 models: | method | epoch 1 | epoch 3 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `w-m-vote-strict-epoch-4` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline 
model_name = 'w-m-vote-strict-epoch-4' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] print(pipe(texts)) # [{'label': 'racist', 'score': 0.9834708571434021}, {'label': 'non-racist', 'score': 0.995682954788208}] ``` For more details, see https://github.com/preyero/neatclass22
AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Krishadow/biobert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Krishadow/biobert-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0450 - Validation Loss: 0.0593 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 678, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1443 | 0.0597 | 0 | | 0.0450 | 0.0593 | 1 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
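The bert-finetuned-ner card above shows only Keras training losses. A token-classification sketch is added below; since the checkpoint was trained with Keras, the TensorFlow side of the pipeline is requested explicitly, and the example sentence is an arbitrary illustration.

```python
from transformers import pipeline

# Repo id taken from the card title; framework="tf" matches the Keras training setup.
ner = pipeline(
    "token-classification",
    model="Krishadow/biobert-finetuned-ner",
    framework="tf",
    aggregation_strategy="simple",
)

print(ner("Aspirin reduced the risk of myocardial infarction in the treated group."))
```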
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- language: en thumbnail: http://www.huggingtweets.com/discord/1668308516202/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1587494876320448512/XH7swTWQ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Discord</div> <div style="text-align: center; font-size: 14px;">@discord</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Discord. | Data | Discord | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 0 | | Short tweets | 85 | | Tweets kept | 3165 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/h46ojex3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @discord's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3bd5uy64) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3bd5uy64/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/discord') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-04-16T15:00:34Z
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - crcb/autotrain-data-go_emo co2_eq_emissions: 31.11935827749309 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 748922872 - CO2 Emissions (in grams): 31.11935827749309 ## Validation Metrics - Loss: 0.17039568722248077 - Accuracy: 0.93625 - Macro F1: 0.9075787460059076 - Micro F1: 0.93625 - Weighted F1: 0.9371621543264445 - Macro Precision: 0.8945117620407296 - Micro Precision: 0.93625 - Weighted Precision: 0.9433589433926076 - Macro Recall: 0.9323604226458176 - Micro Recall: 0.93625 - Weighted Recall: 0.93625 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crcb/autotrain-go_emo-748922872 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("crcb/autotrain-go_emo-748922872", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("crcb/autotrain-go_emo-748922872", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
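The card's Python snippet above stops at the raw model outputs. A short continuation that maps the logits to an emotion label could look like the following; it assumes the `model`, `tokenizer`, and `outputs` variables from that snippet.

```python
import torch

# Continues from the card's snippet: turn the raw logits into a predicted label.
probs = torch.softmax(outputs.logits, dim=-1)
label_id = int(probs.argmax(dim=-1))
print(model.config.id2label[label_id], float(probs[0, label_id]))
```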
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
24
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: yfu2307/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # yfu2307/bert-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0435 - Validation Loss: 0.1205 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1695, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.2953 | 0.1935 | 0 | | 0.1323 | 0.1321 | 1 | | 0.0810 | 0.1176 | 2 | | 0.0565 | 0.1177 | 3 | | 0.0435 | 0.1205 | 4 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: id tags: - bart - id license: mit --- # Indonesian Recipe Ingredients Generator Model **WARNING: inference on Hugging Face might not run since the tokenizer used is not a transformers tokenizer.** Feel free to test the model [in this space](https://huggingface.co/spaces/haryoaw/id-recigen) 😎 **Have fun generating ingredients** 😎 This is a fine-tuned model that generates Indonesian food ingredients. It is one of my personal projects, done in my free time. Basically, you give the name of a food and it will produce the ingredients of that food. ## Model Data: [Indonesian Recipe Data on Kaggle](https://www.kaggle.com/datasets/canggih/indonesian-food-recipes) Pre-trained Model: [IndoBART-v2](https://huggingface.co/indobenchmark/indobart-v2) ## How to use We describe the usage of the tokenizer and the model below. ### Tokenizer Since we use `indobart-v2`, we need to use its tokenizer. First, install the tokenizer with `pip install indobenchmark-toolkit`. After that, you can load the tokenizer: ```python from indobenchmark.tokenization_indonlg import IndoNLGTokenizer tokenizer = IndoNLGTokenizer.from_pretrained("haryoaw/id-recigen-bart") ``` **EDIT**: It seems the tokenizer in the package is not the same as the one I used to fine-tune the model. There are some noticeable bugs, such as some subword tokens not being recognized as subwords. Nevertheless, it still works! ### Model The model can be loaded with `AutoModelForSeq2SeqLM`. ```python from transformers import AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("haryoaw/id-recigen-bart") ``` ## Input Example Make sure to input a **LOWERCASE** food name. The tokenizer is case-sensitive! ``` sayur asam ``` ``` nasi goreng ayam ``` ~To be continued..
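The recipe-ingredients card loads the tokenizer and model but stops before generation. A minimal continuation is sketched below; the decoding settings are assumptions, not values from the original project.

```python
# Continues from the tokenizer/model loading shown in the card above.
food_name = "nasi goreng ayam"  # lowercase, as the card requires

inputs = tokenizer(food_name, return_tensors="pt")

# Beam-search settings here are illustrative assumptions.
output_ids = model.generate(**inputs, num_beams=5, max_length=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```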
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: annaeze/lab9_2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # annaeze/lab9_2 This model is a fine-tuned version of [annaeze/lab9_1](https://huggingface.co/annaeze/lab9_1) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0642 - Validation Loss: 0.0854 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 669, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.3518 | 0.1309 | 0 | | 0.0959 | 0.1059 | 1 | | 0.0642 | 0.0854 | 2 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
23
null
--- tags: - generated_from_keras_callback model-index: - name: satwiksstp/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # satwiksstp/bert-finetuned-ner This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0354 - Validation Loss: 0.0597 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 888, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1204 | 0.0699 | 0 | | 0.0568 | 0.0589 | 1 | | 0.0354 | 0.0597 | 2 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: cwan6830/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # cwan6830/bert-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0247 - Validation Loss: 0.0564 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1226 | 0.0579 | 0 | | 0.0396 | 0.0514 | 1 | | 0.0247 | 0.0564 | 2 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
AnonymousSub/rule_based_twostagequadruplet_hier_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: AdwayK/hugging_face_biobert_MLMAv3 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # AdwayK/hugging_face_biobert_MLMAv3 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0103 - Validation Loss: 0.0861 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1695, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1217 | 0.0687 | 0 | | 0.0405 | 0.0659 | 1 | | 0.0225 | 0.0697 | 2 | | 0.0147 | 0.0830 | 3 | | 0.0103 | 0.0861 | 4 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
AnonymousSub/rule_based_twostagequadruplet_hier_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: osl-3.0 --- # Trained on the OSCAR dataset with a vocabulary size of 50,000
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - generated_from_keras_callback model-index: - name: AdwayK/biobert_on_ADR_as_NER results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # AdwayK/biobert_on_ADR_as_NER This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0413 - Validation Loss: 0.0811 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 975, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.4113 | 0.1466 | 0 | | 0.1128 | 0.0915 | 1 | | 0.0708 | 0.0835 | 2 | | 0.0510 | 0.0800 | 3 | | 0.0413 | 0.0811 | 4 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
AnonymousSub/unsup-consert-base
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: mit tags: - generated_from_trainer model-index: - name: roberta-finetuned-stance-assertive-hillary results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-stance-assertive-hillary This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
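A usage sketch for this sequence-classification fine-tune (not from the card). The repository path is a placeholder, the example text is arbitrary, and the printed label names depend on what was saved in the model config.

```python
# Sketch only — "your-namespace/..." is a placeholder, not a confirmed Hub path.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "your-namespace/roberta-finetuned-stance-assertive-hillary"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

inputs = tokenizer("Hillary Clinton would make a great president.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

for i, p in enumerate(probs.tolist()):
    print(model.config.id2label[i], round(p, 4))  # stance label and probability
```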
Anthos23/FS-distilroberta-fine-tuned
[ "pytorch", "roberta", "text-classification", "transformers", "has_space" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: jiaxin97/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # jiaxin97/bert-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0243 - Validation Loss: 0.0595 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1290 | 0.0578 | 0 | | 0.0398 | 0.0583 | 1 | | 0.0243 | 0.0595 | 2 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
Atiqah/Atiqah
[ "license:artistic-2.0" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-04-17T11:34:20Z
--- library_name : pytorch tags: - huggan - diffusion - text-to-image datasets: - huggan/wikiart task: conditional-image-generation license: mit --- # Distill CLOOB-conditioned Latent Diffusion trained on WikiArt ## Model description This is a smaller version of [this model](https://huggingface.co/huggan/ccld_wa), which is a CLOOB-conditioned latent diffusion model fine-tuned on the [WikiArt dataset](https://huggingface.co/datasets/huggan/wikiart), reducing the latent diffusion model size from 1.2B parameters to 105M parameters with a knowledge distillation method. [CLOOB](https://ml-jku.github.io/cloob/) is a model that encodes images and texts in a unified latent space, similar to what OpenAI's CLIP does. The latent diffusion model takes a CLOOB-encoded latent vector as a condition, which can come from a prompt or an image. ## Intended uses & limitations The latent diffusion model is the only difference from [the teacher model](https://huggingface.co/huggan/ccld_wa); the autoencoder and the CLOOB model were not changed, so they are not provided in this repository. model_student.ckpt: The latent diffusion model checkpoint #### How to use You need some dependencies from multiple repositories linked in this repository: [CLOOB latent diffusion](https://github.com/JD-P/cloob-latent-diffusion): * [CLIP](https://github.com/openai/CLIP/tree/40f5484c1c74edd83cb9cf687c6ab92b28d8b656) * [CLOOB](https://github.com/crowsonkb/cloob-training/tree/136ca7dd69a03eeb6ad525da991d5d7083e44055): the model that encodes images and texts in a unified latent space, used for conditioning the latent diffusion. * [Latent Diffusion](https://github.com/CompVis/latent-diffusion/tree/f13bf9bf463d95b5a16aeadd2b02abde31f769f8): the latent diffusion model definition * [Taming transformers](https://github.com/CompVis/taming-transformers/tree/24268930bf1dce879235a7fddd0b2355b84d7ea6): a pretrained convolutional VQGAN used as an autoencoder to go from image space to the latent space in which the diffusion is done. * [v-diffusion](https://github.com/crowsonkb/v-diffusion-pytorch/tree/ffabbb1a897541fa2a3d034f397c224489d97b39): contains some functions for sampling using a diffusion model with text and/or image prompts. Example code to use the model to sample images from a text prompt can be seen in a [Colab Notebook](https://colab.research.google.com/drive/1XGHdO8IAGajnpb-x4aOb-OMYfZf0WDTi?usp=sharing), or directly in the [app source code](https://huggingface.co/spaces/huggan/wikiart-diffusion-mini/blob/main/app.py) for the Gradio demo on [this Space](https://huggingface.co/spaces/huggan/wikiart-diffusion-mini). #### Limitations and bias The student latent diffusion model was trained only on images from the WikiArt dataset, but the VQGAN autoencoder, the CLOOB model and the teacher latent diffusion model all come from pretrained checkpoints and were trained on images and texts from the internet. According to the [Latent Diffusion paper](https://arxiv.org/abs/2112.10752): “Deep learning modules tend to reproduce or exacerbate biases that are already present in the data”. ## Training data This model was trained on the [WikiArt dataset](https://huggingface.co/datasets/huggan/wikiart) only. Only the images were used during training, no text prompts, so we did not use the style/genre/artist information. ## Training procedure This latent diffusion model was trained with a knowledge distillation process with [huggan/ccld_wa](https://huggingface.co/huggan/ccld_wa) as the teacher model (a generic sketch of this objective is shown after this card).
Training of the teacher model largely followed the guidelines in [JD-P's github repo](https://github.com/JD-P/cloob-latent-diffusion). The model was fine-tuned on the Wikiart dataset for ~12 hours on 2 A6000 GPUs kindly provided by Paperspace. The knowledge distillation process was done on the WikiArt dataset as well. The training of the student model took 17 hours on 1 A6000 GPU provided by Paperspace. [Here](https://wandb.ai/gigant/distill-ccld/reports/Distill-Diffusion-105M--VmlldzoxODQwMTUz) is the `wandb` report for this training. ### Links * [Model card for the teacher model on HuggingFace](https://huggingface.co/huggan/ccld_wa), trained by Jonathan Whitaker. He described the model and training procedure on his [blog post](https://datasciencecastnet.home.blog/2022/04/12/fine-tuning-a-cloob-conditioned-latent-diffusion-model-on-wikiart/) * [Model card for the student model on HuggingFace](https://huggingface.co/huggan/distill-ccld-wa), trained by me. You can check my [WandB report](https://wandb.ai/gigant/distill-ccld/reports/Distill-Diffusion-105M--VmlldzoxODQwMTUz?accessToken=mfbrz1ghfakmh01lybsuycwm3qj3isv60uynnvmina3tiwz5e5ufbjui5xqhmaqi). This version has 105M parameters, against 1.2B parameters for the teacher version. It is lighter, and allows for faster inference, while maintaining some of the original model capability at generating paintings from prompts. * [Gradio demo app on HuggingFace's Spaces](https://huggingface.co/spaces/huggan/wikiart-diffusion-mini) to try out the model with an online demo app * [iPython Notebook](https://github.com/giganttheo/distill-ccld/blob/master/distillCCLD_(Wikiart)_demo.ipynb) to use the model in Python * [WikiArt dataset on `datasets` hub](https://huggingface.co/datasets/huggan/wikiart) * [GitHub repository](https://github.com/giganttheo/distill-ccld)
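The distillation idea described above — train a small student noise-prediction network to match a frozen, larger teacher on the same noised latents and CLOOB condition — can be sketched generically. This is not the authors' training code: the toy networks, latent and condition sizes, and the noising schedule below are assumptions made only so the snippet runs end to end; the real models come from the repositories linked in the card.

```python
# Conceptual sketch of a diffusion distillation objective (assumptions flagged below).
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, cond_dim = 16, 32   # toy sizes; real latents are image-shaped VQGAN codes

class ToyEpsModel(nn.Module):
    """Stand-in for a CLOOB-conditioned noise-prediction network."""
    def __init__(self, width):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, width), nn.SiLU(),
            nn.Linear(width, latent_dim),
        )

    def forward(self, z_t, t, cond):
        return self.net(torch.cat([z_t, cond, t[:, None]], dim=-1))

teacher = ToyEpsModel(width=256).eval()   # frozen, larger "teacher"
student = ToyEpsModel(width=64)           # smaller "student" being trained
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

for step in range(100):
    z0 = torch.randn(8, latent_dim)       # clean latents (VQGAN-encoded images in the real setup)
    cond = torch.randn(8, cond_dim)       # CLOOB embedding of the image in the real setup
    t = torch.rand(8)                     # diffusion time
    eps = torch.randn_like(z0)
    z_t = (1 - t[:, None]) * z0 + t[:, None] * eps   # toy noising schedule (assumption)

    with torch.no_grad():
        target = teacher(z_t, t, cond)    # teacher's noise prediction
    loss = F.mse_loss(student(z_t, t, cond), target)  # student matches the teacher

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```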
Ayham/bert_gpt2_summarization_cnndm_new
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - gpt2 - text-generation - music-modeling - music-generation widget: - text: PIECE_START - text: PIECE_START PIECE_START TRACK_START INST=34 DENSITY=8 - text: PIECE_START TRACK_START INST=1 --- # GPT-2 for Music Language Models such as GPT-2 can be used for Music Generation. The idea is to represent pieces of music as texts, effectively reducing the task to Language Generation. This model is a rather small instance of GPT-2 trained on the [Lakhclean dataset](https://colinraffel.com/projects/lmd/). The model generates 4 bars at a time at a 16th note resolution with 4/4 meter. If you want to contribute, if you want to say hello, if you want to know more, find me here: - https://www.linkedin.com/in/dr-tristan-behrens-734967a2/ - https://www.youtube.com/@drtristanbehrens - https://twitter.com/DrTBehrens - https://github.com/AI-Guru - https://huggingface.co/TristanBehrens - https://huggingface.co/ai-guru Run the model on Google Colab: https://colab.research.google.com/drive/1Mz-KJ8vX4Wylr4mzvgP-MclDwQJ06KSq?usp=sharing ## License You are free to use this model in any open-source context without charge. If you do so, please credit me. However, if you wish to use the model for commercial purposes, please contact me to discuss licensing terms. Depending on the specific use case, there may be fees associated with commercial use. I am open to negotiating the terms of the license to meet your needs and ensure that the model is used appropriately. Please feel free to reach out to me at your earliest convenience to discuss further. ## Model description The model is GPT-2 with 6 decoders and 8 attention heads each. The context length is 2048. The embedding dimensions are 512. ## Model family This model is part of a huge group of Transformers I have trained. Most of them are not publicly available. If you are interested in using and/or licensing one of the models, please get in touch. ### Lakhclean These models were trained on roundabout 15K MIDI files (the same as the model you are viewing now) from the Lakhclean dataset. - lakhclean_mmmbar_4bars_d-2048: 4 bars resolution, bar inpainting, note density conditioning - lakhclean_mmmbar_8bars_d-2048: 8 bars resolution, bar inpainting, note density conditioning - lakhclean_mmmtrack_4bars_chords: 4 bars resolution, chord conditioning - lakhclean_mmmtrack_4bars_d-2048: 4 bars resolution, note density conditioning (this model) - lakhclean_mmmtrack_4bars_simple-2048: 4 bars resolution - lakhclean_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning ### Lakhfull These models were trained on roundabout 175K MIDI files from the Lakh dataset. - lakhfull_mmmtrack_4bars_d-2048: 4 bars resolution, note density conditioning (the big brother of this model) - lakhfull_mmmtrack_4bars_simple-2048: 4 bars resolution ### Metal These models were trained on roundabout 7K MIDI files from my own collections. They contain genre conditioning. - metal_mmmbar_4bars_d-2048: 4 bars resolution, bar inpainting, note density conditioning - metal_mmmbar_8bars_d-2048: 8 bars resolution, bar inpainting, note density conditioning - metal_mmmtrack_4bars_d-2048: 4 bars resolution, note density conditioning - metal_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning ### MetaMIDI Dataset genres These models were trained on genre-specific subsets of the MetaMIDI dataset.
- mmd-baroque_mmmtrack_4bars_d-2048: 4 bars resolution, note density conditioning - mmd-baroque_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning - mmd-classical_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning - mmd-noncontemporary_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning - mmd-pop_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning - mmd-renaissance_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning ### MetaMIDI Dataset full These models were trained on roundabout 400K MIDI files from the MetaMIDI dataset. - mmd-full_mmmtrack_4bars_d-2048: 4 bars resolution, note density conditioning - mmd-full_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning - mmd-full_mmmtrack_4bars_chords-d-2048: 4 bars resolution, note density conditioning, chord conditioning (most powerful model in the entire group) ## Intended uses & limitations This model is just a proof of concept. It shows that HuggingFace can be used to compose music. ### How to use There is a notebook in the repo that you can use to generate symbolic music and then render it. ### Limitations and bias Since this model has been trained on a very small corpus of music, it is overfitting heavily. ### Acknowledgements This model has been created with support from NVIDIA. I am very grateful for the GPU compute they provided!
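A minimal generation sketch for this family of checkpoints, following the widget prompts in the card (not taken from the linked notebook). The repository id is a placeholder — the card does not state the exact Hub path — and the sampling settings are arbitrary; the generated token sequence still needs to be converted back to MIDI with the project's own tooling.

```python
# Sketch only — replace the placeholder repo id with the real Hub path of this checkpoint.
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "your-namespace/lakhclean_mmmtrack_4bars_d-2048"  # placeholder, not a confirmed path

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "PIECE_START TRACK_START INST=34 DENSITY=8"        # token format from the widget examples
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=512, do_sample=True, temperature=0.9)

print(tokenizer.decode(output[0]))  # bar/note tokens; convert back to MIDI downstream
```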
Ayham/bert_gpt2_summarization_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:xsum", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: claim-spotter-multilingual results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # claim-spotter-multilingual This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3285 - F1: 0.7996 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5098 | 1.0 | 830 | 0.3507 | 0.7779 | | 0.3577 | 2.0 | 1660 | 0.3285 | 0.7996 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
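A usage sketch for this claim-spotting classifier (not part of the card). The repository path is a placeholder built from the model name, and the example sentences are arbitrary.

```python
# Sketch only — the model path is a placeholder; substitute the real Hub id.
from transformers import pipeline

clf = pipeline("text-classification", model="your-namespace/claim-spotter-multilingual")

sentences = [
    "The unemployment rate fell to 3.5 percent last year.",  # check-worthy factual claim
    "I really enjoyed the concert yesterday.",               # personal opinion
]
for s in sentences:
    print(s, "->", clf(s))
```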
Ayham/bertgpt2_cnn
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln37") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln37") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence.
Ayham/ernie_gpt2_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: apache-2.0 tags: - gan - pggan - huggan - unconditional-image-generation --- The model provided is a PGGAN generator trained on the CelebA-HQ dataset with a resolution of 1024px. It is uploaded as part of porting this project, https://github.com/genforce/sefa, to Hugging Face Spaces.
Ayham/xlmroberta_large_gpt2_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2022-04-17T20:30:49Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: xls-r-300m-bemba-15hrs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xls-r-300m-bemba-15hrs This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2754 - Wer: 0.3481 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5142 | 0.71 | 400 | 0.5585 | 0.7501 | | 0.6351 | 1.43 | 800 | 0.3185 | 0.5058 | | 0.4892 | 2.15 | 1200 | 0.2813 | 0.4655 | | 0.4021 | 2.86 | 1600 | 0.2539 | 0.4159 | | 0.3505 | 3.58 | 2000 | 0.2411 | 0.4000 | | 0.3045 | 4.29 | 2400 | 0.2512 | 0.3951 | | 0.274 | 5.01 | 2800 | 0.2402 | 0.3922 | | 0.2335 | 5.72 | 3200 | 0.2403 | 0.3764 | | 0.2032 | 6.44 | 3600 | 0.2383 | 0.3657 | | 0.1783 | 7.16 | 4000 | 0.2603 | 0.3518 | | 0.1487 | 7.87 | 4400 | 0.2479 | 0.3577 | | 0.1281 | 8.59 | 4800 | 0.2638 | 0.3518 | | 0.113 | 9.3 | 5200 | 0.2754 | 0.3481 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
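A usage sketch for this fine-tuned ASR checkpoint (not from the card). The repository path and the audio file name are placeholders; any 16 kHz mono speech recording in Bemba would do.

```python
# Sketch only — the model path and the audio file are placeholders.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="your-namespace/xls-r-300m-bemba-15hrs")  # placeholder path

result = asr("bemba_sample.wav")  # any 16 kHz mono recording
print(result["text"])
```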
Ayham/xlnet_distilgpt2_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: other tags: - generated_from_trainer model-index: - name: distilroberta-current results: [] --- # distilroberta-current This model classifies articles as current (covering or discussing current events) or not current (not relating to current events). The model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on a dataset of articles labeled using weak supervision and manual labeling. It achieves the following results on the evaluation set: - Loss: 0.1745 - Acc: 0.9355 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 12345 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 16 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Acc | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 11 | 0.6559 | 0.7097 | | 0.6762 | 2.0 | 22 | 0.5627 | 0.7097 | | 0.5432 | 3.0 | 33 | 0.4606 | 0.7097 | | 0.5432 | 4.0 | 44 | 0.3651 | 0.8065 | | 0.411 | 5.0 | 55 | 0.2512 | 0.9194 | | 0.269 | 6.0 | 66 | 0.2774 | 0.9355 | | 0.269 | 7.0 | 77 | 0.2062 | 0.8710 | | 0.2294 | 8.0 | 88 | 0.2598 | 0.9355 | | 0.1761 | 9.0 | 99 | 0.1745 | 0.9355 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
Ayoola/pytorch_model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-04-18T01:32:22Z
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - crcb/autotrain-data-hate_speech co2_eq_emissions: 5.301132895184483 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 752122994 - CO2 Emissions (in grams): 5.301132895184483 ## Validation Metrics - Loss: 0.7107211351394653 - Accuracy: 0.7529411764705882 - Precision: 0.7502287282708143 - Recall: 0.9177392277560157 - AUC: 0.8358316393336287 - F1: 0.8255726151522779 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crcb/autotrain-hate_speech-752122994 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("crcb/autotrain-hate_speech-752122994", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("crcb/autotrain-hate_speech-752122994", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2022-04-18T01:53:31Z
--- tags: - generated_from_trainer model-index: - name: kobigbird-bert-base-finetuned-klue-goorm-q-a-task results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-bert-base-finetuned-klue-goorm-q-a-task This model is a fine-tuned version of [ToToKr/kobigbird-bert-base-finetuned-klue](https://huggingface.co/ToToKr/kobigbird-bert-base-finetuned-klue) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2115 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6159 | 0.09 | 500 | 1.7522 | | 1.554 | 0.17 | 1000 | 1.5953 | | 1.4493 | 0.26 | 1500 | 1.3769 | | 1.4051 | 0.35 | 2000 | 1.3746 | | 1.3251 | 0.43 | 2500 | 1.5049 | | 1.2855 | 0.52 | 3000 | 1.1733 | | 1.2226 | 0.6 | 3500 | 1.1538 | | 1.1907 | 0.69 | 4000 | 1.1470 | | 1.1655 | 0.78 | 4500 | 1.0759 | | 1.1411 | 0.86 | 5000 | 1.0676 | | 1.0752 | 0.95 | 5500 | 0.9894 | | 0.9389 | 1.04 | 6000 | 1.2020 | | 0.8457 | 1.12 | 6500 | 1.1004 | | 0.7977 | 1.21 | 7000 | 1.1397 | | 0.818 | 1.29 | 7500 | 1.2960 | | 0.8142 | 1.38 | 8000 | 1.2115 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
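A usage sketch for this extractive QA fine-tune (not from the card). The repository path is a placeholder, and the Korean question/context pair is an arbitrary example.

```python
# Sketch only — the model path is a placeholder for the actual Hub id of this checkpoint.
from transformers import pipeline

qa = pipeline("question-answering",
              model="your-namespace/kobigbird-bert-base-finetuned-klue-goorm-q-a-task")

result = qa(
    question="서울은 어느 나라의 수도인가?",              # "Which country's capital is Seoul?"
    context="서울은 대한민국의 수도이자 최대 도시이다.",   # "Seoul is the capital and largest city of South Korea."
)
print(result["answer"], result["score"])
```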
Ayran/DialoGPT-small-harry-potter-1-through-3
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- language: ko widget: - text: "코딩을 🐶🍾👟같이 하니까 맨날 장애나잖아 이 🧑‍🦽아" datasets: - jason9693/APEACH ---
Ayu/Shiriro
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln38") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln38") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence.
AyushPJ/ai-club-inductions-21-nlp-ALBERT
[ "pytorch", "albert", "question-answering", "transformers", "generated_from_trainer", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9215 - name: F1 type: f1 value: 0.9215748499839705 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2302 - Accuracy: 0.9215 - F1: 0.9216 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8775 | 1.0 | 250 | 0.3501 | 0.894 | 0.8871 | | 0.2658 | 2.0 | 500 | 0.2302 | 0.9215 | 0.9216 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
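A usage sketch for this emotion classifier (the repository namespace is assumed, and the input sentence is arbitrary).

```python
# Sketch only — assumes the checkpoint is published under the name shown in the card.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="your-namespace/distilbert-base-uncased-finetuned-emotion")  # placeholder namespace

print(classifier("I can't wait to see you again!"))  # e.g. [{'label': 'joy', 'score': ...}]
```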
BME-TMIT/foszt2oszt
[ "pytorch", "encoder-decoder", "text2text-generation", "hu", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- tags: - conversational --- # Michael Scott Chatbot
BOON/electra_qa
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en thumbnail: http://www.huggingtweets.com/tojibawhiteroom/1650256419756/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1509337156787003394/WjOdf_-m_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Tojiba White Room (T__T).1</div> <div style="text-align: center; font-size: 14px;">@tojibawhiteroom</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Tojiba White Room (T__T).1. | Data | Tojiba White Room (T__T).1 | | --- | --- | | Tweets downloaded | 212 | | Retweets | 0 | | Short tweets | 26 | | Tweets kept | 186 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1okoxv9l/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tojibawhiteroom's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jqxicud) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jqxicud/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tojibawhiteroom') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
BSC-LT/gpt2-large-bne
[ "pytorch", "gpt2", "text-generation", "es", "dataset:bne", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "license:apache-2.0" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: facility-classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # facility-classifier This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4422 - Accuracy: 0.7872 - F1: 0.7854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.671 | 1.0 | 12 | 0.6529 | 0.6596 | 0.6441 | | 0.5845 | 2.0 | 24 | 0.5722 | 0.7447 | 0.7461 | | 0.4902 | 3.0 | 36 | 0.5091 | 0.7447 | 0.7461 | | 0.378 | 4.0 | 48 | 0.4797 | 0.7660 | 0.7670 | | 0.354 | 5.0 | 60 | 0.4487 | 0.8085 | 0.8029 | | 0.2865 | 6.0 | 72 | 0.4422 | 0.7872 | 0.7854 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
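The card above stops at training details and never shows inference. Below is a minimal sketch of how such a fine-tuned DistilBERT classifier could be queried through the `pipeline` API; the repo id `your-namespace/facility-classifier` and the example sentence are placeholders, not values taken from the card.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual location of the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="your-namespace/facility-classifier")

# Returns the predicted facility label and its confidence score for each input string.
print(classifier("Leaking pipe reported in the basement utility room."))
```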
BSC-LT/roberta-base-bne
[ "pytorch", "roberta", "fill-mask", "es", "dataset:bne", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
594
2022-04-18T05:51:49Z
--- inference: false pipeline_tag: sentence-similarity language: - bg license: mit datasets: - oscar - chitanka - wikipedia tags: - torch --- # ROBERTA BASE (cased) trained on private Bulgarian-English parallel data This is a Multilingual Roberta model. It could be used for creating embeddings of Bulgarian sentences. Using the ideas from [Sentence-BERT](https://arxiv.org/abs/2004.09813), the training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence. This model is cased: it does make a difference between bulgarian and Bulgarian. It was trained on private Bulgarian-English parallel data. Then, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925). ### How to use Here is how to use this model in PyTorch: ```python >>> import scipy >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> >>> model = AutoModel.from_pretrained('rmihaylov/roberta-base-nli-stsb-theseus-bg') >>> tokenizer = AutoTokenizer.from_pretrained('rmihaylov/roberta-base-nli-stsb-theseus-bg') >>> >>> def embed(text): >>> inputs = tokenizer.encode_plus(text, return_tensors='pt') >>> outputs = model(**inputs) >>> sequence_output = outputs[0] >>> input_mask_expanded = inputs['attention_mask'].unsqueeze(-1).expand(sequence_output.size()).float() >>> embeddings = torch.sum(sequence_output * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) >>> return embeddings.detach().numpy()[0] >>> >>> >>> query_embedding = embed("Какви са съставките на бисквитките?") >>> >>> questions = [ >>> "Какво е бисквитка?", >>> "От какво са направени бисквитките?", >>> "Използват ли в Англия думата бисквитки?", >>> "Къде се правят бисквитките?", >>> "Какви видове бисквитки има?", >>> "Къде човек може да купи бисквитки?", >>> "Откъде дойде думата бисквитка?", >>> "Кое е чудовището на бисквитките?", >>> "Как да си направите бисквитки у дома?", >>> "Колко калории има типичната бисквитка?", >>> "Какви напитки вървят добре с бисквитките?", >>> "Бисквитките наричат ли се също сладки?" >>> ] >>> >>> corpus, corpus_embeddings = [], [] >>> for question in questions: >>> embedding = embed(question) >>> corpus.append(question) >>> corpus_embeddings.append(embedding) >>> >>> distances = scipy.spatial.distance.cdist([query_embedding], corpus_embeddings, "cosine")[0] >>> >>> results = zip(range(len(distances)), distances) >>> results = sorted(results, key=lambda x: x[1]) >>> >>> print([[corpus[idx].strip(), (1.0 - distance)] for idx, distance in results]) [['От какво са направени бисквитките?', 0.9855158537034977], ['Къде се правят бисквитките?', 0.9774093134195002], ['Какви видове бисквитки има?', 0.9766014240577192], ['Използват ли в Англия думата бисквитки?', 0.9446492058523037], ['Кое е чудовището на бисквитките?', 0.9269786184641834], ['Къде човек може да купи бисквитки?', 0.9268900421152592], ['Какво е бисквитка?', 0.9188155080718263], ['Бисквитките наричат ли се също сладки?', 0.9060368627614406], ['Откъде дойде думата бисквитка?', 0.9048309659657036], ['Какви напитки вървят добре с бисквитките?', 0.890836765118977], ['Как да си направите бисквитки у дома?', 0.8878968487540497], ['Колко калории има типичната бисквитка?', 0.8652821650136402]] ```
BSC-LT/roberta-large-bne
[ "pytorch", "roberta", "fill-mask", "es", "dataset:bne", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
24
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.922 - name: F1 type: f1 value: 0.9222074564200887 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2170 - Accuracy: 0.922 - F1: 0.9222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8116 | 1.0 | 250 | 0.3076 | 0.9035 | 0.9013 | | 0.2426 | 2.0 | 500 | 0.2170 | 0.922 | 0.9222 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu102 - Datasets 2.0.0 - Tokenizers 0.12.1
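As with the other auto-generated trainer cards, the usage section above is empty. The sketch below shows one way the fine-tuned emotion classifier could be called directly with `AutoModelForSequenceClassification`; the repo id and input sentence are placeholders, and the label names are assumed to come from the emotion dataset's standard `id2label` mapping.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id -- replace with wherever this fine-tuned checkpoint was pushed.
ckpt = "your-namespace/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

inputs = tokenizer("I can't believe how well this turned out!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# id2label maps class indices to emotion names (e.g. joy, sadness, anger).
print(model.config.id2label[logits.argmax(dim=-1).item()])
```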
BSen/wav2vec2-base-timit-demo-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-04-18T06:53:05Z
--- language: en license: cc-by-nc-sa-4.0 --- # LayoutLMv3 [Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://aka.ms/layoutlmv3) ## Model description LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model. For example, LayoutLMv3 can be fine-tuned for both text-centric tasks, including form understanding, receipt understanding, and document visual question answering, and image-centric tasks such as document image classification and document layout analysis. [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei, ACM Multimedia 2022. ## Citation If you find LayoutLM useful in your research, please cite the following paper: ``` @inproceedings{huang2022layoutlmv3, author={Yupan Huang and Tengchao Lv and Lei Cui and Yutong Lu and Furu Wei}, title={LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking}, booktitle={Proceedings of the 30th ACM International Conference on Multimedia}, year={2022} } ``` ## License The content of this project itself is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Portions of the source code are based on the [transformers](https://github.com/huggingface/transformers) project. [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct)
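The card describes the fine-tuning scenarios but gives no code. The following is a minimal inference sketch, assuming the publicly released `microsoft/layoutlmv3-base` checkpoint and hand-supplied OCR words and boxes; the image path, word list, box coordinates, and `num_labels` value are all illustrative placeholders.

```python
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

# apply_ocr=False because words and boxes are supplied manually (avoids the Tesseract dependency).
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = LayoutLMv3ForTokenClassification.from_pretrained("microsoft/layoutlmv3-base", num_labels=7)

image = Image.open("document.png").convert("RGB")                    # placeholder document image
words = ["Invoice", "No.", "12345"]                                  # placeholder OCR words
boxes = [[80, 40, 190, 70], [200, 40, 240, 70], [250, 40, 330, 70]]  # boxes normalized to 0-1000

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
print(outputs.logits.shape)  # (batch_size, sequence_length, num_labels)
```

For image-centric tasks such as document image classification, `LayoutLMv3ForSequenceClassification` can be substituted with the same processor.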
BSen/wav2vec2-large-xls-r-300m-turkish-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: en license: cc-by-nc-sa-4.0 --- # LayoutLMv3 [Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://aka.ms/layoutlmv3) ## Model description LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model. For example, LayoutLMv3 can be fine-tuned for both text-centric tasks, including form understanding, receipt understanding, and document visual question answering, and image-centric tasks such as document image classification and document layout analysis. [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei, Preprint 2022. ## Citation If you find LayoutLM useful in your research, please cite the following paper: ``` @inproceedings{huang2022layoutlmv3, author={Yupan Huang and Tengchao Lv and Lei Cui and Yutong Lu and Furu Wei}, title={LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking}, booktitle={Proceedings of the 30th ACM International Conference on Multimedia}, year={2022} } ``` ## License The content of this project itself is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Portions of the source code are based on the [transformers](https://github.com/huggingface/transformers) project. [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct)
BW/TEST
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
2022-04-18T07:02:39Z
--- inference: false pipeline_tag: sentence-similarity language: - bg license: mit datasets: - oscar - chitanka - wikipedia tags: - torch --- # ROBERTA BASE (cased) trained on private Bulgarian-English parallel data This is a Multilingual Roberta model. It could be used for creating embeddings of Bulgarian sentences. Using the ideas from [Sentence-BERT](https://arxiv.org/abs/2004.09813), the training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence. This model is cased: it does make a difference between bulgarian and Bulgarian. It was trained on private Bulgarian-English parallel data. ### How to use Here is how to use this model in PyTorch: ```python >>> import scipy >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> >>> model = AutoModel.from_pretrained('rmihaylov/roberta-base-nli-stsb-bg') >>> tokenizer = AutoTokenizer.from_pretrained('rmihaylov/roberta-base-nli-stsb-bg') >>> >>> def embed(text): >>> inputs = tokenizer.encode_plus(text, return_tensors='pt') >>> outputs = model(**inputs) >>> sequence_output = outputs[0] >>> input_mask_expanded = inputs['attention_mask'].unsqueeze(-1).expand(sequence_output.size()).float() >>> embeddings = torch.sum(sequence_output * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) >>> return embeddings.detach().numpy()[0] >>> >>> >>> query_embedding = embed("Какви са съставките на бисквитките?") >>> >>> questions = [ >>> "Какво е бисквитка?", >>> "От какво са направени бисквитките?", >>> "Използват ли в Англия думата бисквитки?", >>> "Къде се правят бисквитките?", >>> "Какви видове бисквитки има?", >>> "Къде човек може да купи бисквитки?", >>> "Откъде дойде думата бисквитка?", >>> "Кое е чудовището на бисквитките?", >>> "Как да си направите бисквитки у дома?", >>> "Колко калории има типичната бисквитка?", >>> "Какви напитки вървят добре с бисквитките?", >>> "Бисквитките наричат ли се също сладки?" >>> ] >>> >>> corpus, corpus_embeddings = [], [] >>> for question in questions: >>> embedding = embed(question) >>> corpus.append(question) >>> corpus_embeddings.append(embedding) >>> >>> distances = scipy.spatial.distance.cdist([query_embedding], corpus_embeddings, "cosine")[0] >>> >>> results = zip(range(len(distances)), distances) >>> results = sorted(results, key=lambda x: x[1]) >>> >>> print([[corpus[idx].strip(), (1.0 - distance)] for idx, distance in results]) [['Какви видове бисквитки има?', 0.9749538412820795], ['От какво са направени бисквитките?', 0.9720467855849998], ['Къде се правят бисквитките?', 0.9622582076645853], ['Какво е бисквитка?', 0.9352896865855094], ['Използват ли в Англия думата бисквитки?', 0.8981422328370646], ['Откъде дойде думата бисквитка?', 0.8955433698658758], ['Кое е чудовището на бисквитките?', 0.8902666858687854], ['Бисквитките наричат ли се също сладки?', 0.8839303534407483], ['Какви напитки вървят добре с бисквитките?', 0.8582087653310524], ['Къде човек може да купи бисквитки?', 0.8570532540073935], ['Колко калории има типичната бисквитка?', 0.8387529949080176], ['Как да си направите бисквитки у дома?', 0.8243675958097614]] ```
Babelscape/rebel-large
[ "pytorch", "safetensors", "bart", "text2text-generation", "en", "dataset:Babelscape/rebel-dataset", "transformers", "seq2seq", "relation-extraction", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "has_space" ]
text2text-generation
{ "architectures": [ "BartForConditionalGeneration" ], "model_type": "bart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9,458
null
--- language: en thumbnail: http://www.huggingtweets.com/buckeshot-onlinepete/1662024914888/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1545140847259406337/bTk2lL6O_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/456958582731603969/QZKpv6eI_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">BUCKSHOT & im pete online</div> <div style="text-align: center; font-size: 14px;">@buckeshot-onlinepete</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from BUCKSHOT & im pete online. | Data | BUCKSHOT | im pete online | | --- | --- | --- | | Tweets downloaded | 311 | 3190 | | Retweets | 77 | 94 | | Short tweets | 46 | 1003 | | Tweets kept | 188 | 2093 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wyw1egj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @buckeshot-onlinepete's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1bnj1d4d) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1bnj1d4d/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/buckeshot-onlinepete') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Babysittingyoda/DialoGPT-small-familyguy
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2022-04-18T07:37:39Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: xls-r-300m-bemba-5hrs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xls-r-300m-bemba-5hrs This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3129 - Wer: 0.4430 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4473 | 2.16 | 400 | 0.4687 | 0.6798 | | 0.5882 | 4.32 | 800 | 0.3235 | 0.5089 | | 0.3508 | 6.49 | 1200 | 0.3190 | 0.4695 | | 0.21 | 8.65 | 1600 | 0.3129 | 0.4430 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
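The trainer card above reports WER but omits inference code. A minimal CTC transcription sketch is shown below; the repo id `your-namespace/xls-r-300m-bemba-5hrs` and the audio file name are placeholders, and the only firm assumption is that the checkpoint follows the usual `Wav2Vec2ForCTC` layout with 16 kHz input.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder repo id -- point this at the pushed fine-tuned checkpoint.
ckpt = "your-namespace/xls-r-300m-bemba-5hrs"
processor = Wav2Vec2Processor.from_pretrained(ckpt)
model = Wav2Vec2ForCTC.from_pretrained(ckpt)

# XLS-R expects 16 kHz mono audio; librosa resamples on load.
speech, _ = librosa.load("bemba_sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each frame.
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```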
Badr/model1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-04-18T07:58:02Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-ar-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-ar-2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4764 - Wer: 0.3073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.0851 | 1.18 | 400 | 0.5614 | 0.4888 | | 0.691 | 2.35 | 800 | 0.6557 | 0.5558 | | 0.6128 | 3.53 | 1200 | 0.5852 | 0.5070 | | 0.543 | 4.71 | 1600 | 0.5591 | 0.4838 | | 0.5185 | 5.88 | 2000 | 0.6649 | 0.5514 | | 0.4816 | 7.06 | 2400 | 0.5598 | 0.4689 | | 0.4336 | 8.24 | 2800 | 0.5384 | 0.4515 | | 0.405 | 9.41 | 3200 | 0.4987 | 0.4138 | | 0.3811 | 10.59 | 3600 | 0.5427 | 0.4644 | | 0.3539 | 11.76 | 4000 | 0.4881 | 0.4159 | | 0.3299 | 12.94 | 4400 | 0.5160 | 0.4198 | | 0.3096 | 14.12 | 4800 | 0.5019 | 0.4077 | | 0.2881 | 15.29 | 5200 | 0.5146 | 0.4140 | | 0.2894 | 16.47 | 5600 | 0.4861 | 0.4026 | | 0.2461 | 17.65 | 6000 | 0.4765 | 0.3742 | | 0.2371 | 18.82 | 6400 | 0.4679 | 0.3672 | | 0.2182 | 20.0 | 6800 | 0.4699 | 0.3603 | | 0.1942 | 21.18 | 7200 | 0.4769 | 0.3519 | | 0.1823 | 22.35 | 7600 | 0.4719 | 0.3497 | | 0.1682 | 23.53 | 8000 | 0.4876 | 0.3456 | | 0.1526 | 24.71 | 8400 | 0.4591 | 0.3300 | | 0.137 | 25.88 | 8800 | 0.4819 | 0.3314 | | 0.1283 | 27.06 | 9200 | 0.4823 | 0.3213 | | 0.1174 | 28.24 | 9600 | 0.4879 | 0.3174 | | 0.1104 | 29.41 | 10000 | 0.4764 | 0.3073 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0 - Datasets 1.18.4 - Tokenizers 0.11.6
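For this Arabic checkpoint the same caveat applies: no usage example is given. The sketch below uses the high-level ASR pipeline instead of manual CTC decoding; the repo id and audio file name are placeholders, and reading an audio file this way requires ffmpeg to be available.

```python
from transformers import pipeline

# Placeholder repo id and file name -- substitute the real fine-tuned checkpoint and recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-namespace/wav2vec2-large-xls-r-300m-ar-2",
)

# The pipeline reads the file, resamples it to 16 kHz, and runs greedy CTC decoding internally.
print(asr("arabic_sample.wav")["text"])
```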
Bagus/wav2vec2-xlsr-greek-speech-emotion-recognition
[ "pytorch", "tensorboard", "wav2vec2", "el", "dataset:aesdd", "transformers", "audio", "audio-classification", "speech", "license:apache-2.0" ]
audio-classification
{ "architectures": [ "Wav2Vec2ForSpeechClassification" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
null
--- inference: false pipeline_tag: sentence-similarity language: - bg license: mit datasets: - oscar - chitanka - wikipedia tags: - torch --- # ROBERTA BASE (cased) trained on private Bulgarian-English parallel data This is a Multilingual Roberta model. It could be used for creating embeddings of Bulgarian sentences. Using the ideas from [Sentence-BERT](https://arxiv.org/abs/2004.09813), the training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence. The teacher model is the [USE model by Google](https://aclanthology.org/D18-2029/). This model is cased: it does make a difference between bulgarian and Bulgarian. It was trained on private Bulgarian-English parallel data. ### How to use Here is how to use this model in PyTorch: ```python >>> import scipy >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> >>> model = AutoModel.from_pretrained('rmihaylov/roberta-base-use-qa-bg') >>> tokenizer = AutoTokenizer.from_pretrained('rmihaylov/roberta-base-use-qa-bg') >>> >>> query = "Какви са съставките на бисквитките?" >>> >>> answers = [ >>> "Бисквитката е печена или варена храна, която обикновено е малка, плоска и сладка.", >>> "Бисквитките обикновено съдържат брашно, захар и някакъв вид масло или мазнини. Те могат да включват други съставки като стафиди, овес, шоколадов чипс, ядки и др.", >>> "В повечето англоговорящи страни, с изключение на САЩ и Канада, хрупкавите бисквитки се наричат ​​бисквити.", >>> "Бисквитите Chewier понякога се наричат ​​бисквитки дори в Обединеното кралство. Някои бисквитки могат също да бъдат назовавани според формата им, като квадратчета с дата или барове.", >>> "Бисквитките или бисквитите могат да се произвеждат масово във фабрики, направени в малки пекарни или домашно приготвени.", >>> "Вариантите за бисквити или бисквити включват сандвич бисквити, като крем крем, Jammie Dodgers, Bourbons и Oreos, с пълнеж от ружа или конфитюр и понякога потопени в шоколад или друго сладко покритие.", >>> "Бисквитките често се сервират с напитки като мляко, кафе или чай.", >>> "Фабричните бисквитки се продават в магазини за хранителни стоки, магазини за удобство и автомати.", >>> "Американската употреба произлиза от холандското koekje „малка торта“, което е умалително от „koek“ („торта“), което произлиза от средно холандската дума „koke“.", >>> "Cookie Monster е Muppet в дългогодишното детско телевизионно шоу Sesame Street, който е най-известен с ненаситния си апетит към бисквитките и известните си фрази за ядене, като „Me want cookie!“, „Me eat cookie!“ (или просто „COOKIE!“) и „Om nom nom nom“ (казано през уста, пълна с храна).", >>> "Домашните бисквитки обикновено се правят от тесто, оформено на малки топчета и пуснато върху лист с бисквитки. След това се пекат във фурна за 5 до 15 минути, в зависимост от рецептата. 
Температурата на фурната варира от 250 до 350 градуса.", >>> "Повечето бисквитки със среден размер, ако са направени със захар, брашно и скъсяване, ще съдържат между 100 и 200 калории.", >>> ] >>> >>> query_embedding = model.question(**tokenizer.encode_plus(query, return_tensors='pt')).detach().numpy()[0] >>> >>> corpus, corpus_embeddings = [], [] >>> for answer in answers: >>> value_inputs = tokenizer.encode_plus(answer, answer, return_tensors='pt') >>> embedding = model.answer(**value_inputs).detach().numpy()[0] >>> corpus.append(answer) >>> corpus_embeddings.append(embedding) >>> >>> distances = scipy.spatial.distance.cdist([query_embedding], corpus_embeddings, "cosine")[0] >>> >>> results = zip(range(len(distances)), distances) >>> results = sorted(results, key=lambda x: x[1]) >>> >>> print([[corpus[idx].strip(), (1.0 - distance)] for idx, distance in results]) [['Бисквитките обикновено съдържат брашно, захар и някакъв вид масло или мазнини. Те могат да включват други съставки като стафиди, овес, шоколадов чипс, ядки и др.', 0.620301064877746], ['Бисквитката е печена или варена храна, която обикновено е малка, плоска и сладка.', 0.5696434424179133], ['Повечето бисквитки със среден размер, ако са направени със захар, брашно и скъсяване, ще съдържат между 100 и 200 калории.', 0.5496458499598336], ['Бисквитките или бисквитите могат да се произвеждат масово във фабрики, направени в малки пекарни или домашно приготвени.', 0.5365738121336622], ['Бисквитите Chewier понякога се наричат \u200b\u200bбисквитки дори в Обединеното кралство. Някои бисквитки могат също да бъдат назовавани според формата им, като квадратчета с дата или барове.', 0.5278547550921155], ['Вариантите за бисквити или бисквити включват сандвич бисквити, като крем крем, Jammie Dodgers, Bourbons и Oreos, с пълнеж от ружа или конфитюр и понякога потопени в шоколад или друго сладко покритие.', 0.5231947553588652], ['Фабричните бисквитки се продават в магазини за хранителни стоки, магазини за удобство и автомати.', 0.5222493948012543], ['В повечето англоговорящи страни, с изключение на САЩ и Канада, хрупкавите бисквитки се наричат \u200b\u200bбисквити.', 0.5185776999549867], ['Домашните бисквитки обикновено се правят от тесто, оформено на малки топчета и пуснато върху лист с бисквитки. След това се пекат във фурна за 5 до 15 минути, в зависимост от рецептата. Температурата на фурната варира от 250 до 350 градуса.', 0.5113299248563532], ['Cookie Monster е Muppet в дългогодишното детско телевизионно шоу Sesame Street, който е най-известен с ненаситния си апетит към бисквитките и известните си фрази за ядене, като „Me want cookie!“, „Me eat cookie!“ (или просто „COOKIE!“) и „Om nom nom nom“ (казано през уста, пълна с храна).', 0.4642001162793412], ['Бисквитките често се сервират с напитки като мляко, кафе или чай.', 0.44902199326988135], ['Американската употреба произлиза от холандското koekje „малка торта“, което е умалително от „koek“ („торта“), което произлиза от средно холандската дума „koke“.', 0.25256183690274214]] ```
BalajiSathesh/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: cc --- # Talking Bot An AI used for the Discord Talking Bot. That's all.
Balgow/prod_desc
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- inference: false pipeline_tag: sentence-similarity language: - bg license: mit datasets: - oscar - chitanka - wikipedia tags: - torch --- # ROBERTA BASE (cased) trained on private Bulgarian-English parallel data This is a Multilingual Roberta model. It could be used for creating embeddings of Bulgarian sentences. Using the ideas from [Sentence-BERT](https://arxiv.org/abs/2004.09813), the training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence. The teacher model is the [USE model by Google](https://aclanthology.org/D18-2029/). This model is cased: it does make a difference between bulgarian and Bulgarian. It was trained on private Bulgarian-English parallel data. Then, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925). ### How to use Here is how to use this model in PyTorch: ```python >>> import scipy >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> >>> model = AutoModel.from_pretrained('rmihaylov/roberta-base-use-qa-theseus-bg') >>> tokenizer = AutoTokenizer.from_pretrained('rmihaylov/roberta-base-use-qa-theseus-bg') >>> >>> query = "Какви са съставките на бисквитките?" >>> >>> answers = [ >>> "Бисквитката е печена или варена храна, която обикновено е малка, плоска и сладка.", >>> "Бисквитките обикновено съдържат брашно, захар и някакъв вид масло или мазнини. Те могат да включват други съставки като стафиди, овес, шоколадов чипс, ядки и др.", >>> "В повечето англоговорящи страни, с изключение на САЩ и Канада, хрупкавите бисквитки се наричат ​​бисквити.", >>> "Бисквитите Chewier понякога се наричат ​​бисквитки дори в Обединеното кралство. Някои бисквитки могат също да бъдат назовавани според формата им, като квадратчета с дата или барове.", >>> "Бисквитките или бисквитите могат да се произвеждат масово във фабрики, направени в малки пекарни или домашно приготвени.", >>> "Вариантите за бисквити или бисквити включват сандвич бисквити, като крем крем, Jammie Dodgers, Bourbons и Oreos, с пълнеж от ружа или конфитюр и понякога потопени в шоколад или друго сладко покритие.", >>> "Бисквитките често се сервират с напитки като мляко, кафе или чай.", >>> "Фабричните бисквитки се продават в магазини за хранителни стоки, магазини за удобство и автомати.", >>> "Американската употреба произлиза от холандското koekje „малка торта“, което е умалително от „koek“ („торта“), което произлиза от средно холандската дума „koke“.", >>> "Cookie Monster е Muppet в дългогодишното детско телевизионно шоу Sesame Street, който е най-известен с ненаситния си апетит към бисквитките и известните си фрази за ядене, като „Me want cookie!“, „Me eat cookie!“ (или просто „COOKIE!“) и „Om nom nom nom“ (казано през уста, пълна с храна).", >>> "Домашните бисквитки обикновено се правят от тесто, оформено на малки топчета и пуснато върху лист с бисквитки. След това се пекат във фурна за 5 до 15 минути, в зависимост от рецептата. 
Температурата на фурната варира от 250 до 350 градуса.", >>> "Повечето бисквитки със среден размер, ако са направени със захар, брашно и скъсяване, ще съдържат между 100 и 200 калории.", >>> ] >>> >>> query_embedding = model.question(**tokenizer.encode_plus(query, return_tensors='pt')).detach().numpy()[0] >>> >>> corpus, corpus_embeddings = [], [] >>> for answer in answers: >>> value_inputs = tokenizer.encode_plus(answer, answer, return_tensors='pt') >>> embedding = model.answer(**value_inputs).detach().numpy()[0] >>> corpus.append(answer) >>> corpus_embeddings.append(embedding) >>> >>> distances = scipy.spatial.distance.cdist([query_embedding], corpus_embeddings, "cosine")[0] >>> >>> results = zip(range(len(distances)), distances) >>> results = sorted(results, key=lambda x: x[1]) >>> >>> print([[corpus[idx].strip(), (1.0 - distance)] for idx, distance in results]) [['Бисквитките обикновено съдържат брашно, захар и някакъв вид масло или мазнини. Те могат да включват други съставки като стафиди, овес, шоколадов чипс, ядки и др.', 0.5449754306536151], ['Фабричните бисквитки се продават в магазини за хранителни стоки, магазини за удобство и автомати.', 0.5049509545814316], ['В повечето англоговорящи страни, с изключение на САЩ и Канада, хрупкавите бисквитки се наричат \u200b\u200bбисквити.', 0.5029661338050297], ['Бисквитките или бисквитите могат да се произвеждат масово във фабрики, направени в малки пекарни или домашно приготвени.', 0.4991678233218718], ['Вариантите за бисквити или бисквити включват сандвич бисквити, като крем крем, Jammie Dodgers, Bourbons и Oreos, с пълнеж от ружа или конфитюр и понякога потопени в шоколад или друго сладко покритие.', 0.49050297326146386], ['Повечето бисквитки със среден размер, ако са направени със захар, брашно и скъсяване, ще съдържат между 100 и 200 калории.', 0.48950875441294106], ['Бисквитката е печена или варена храна, която обикновено е малка, плоска и сладка.', 0.48646309549536737], ['Бисквитите Chewier понякога се наричат \u200b\u200bбисквитки дори в Обединеното кралство. Някои бисквитки могат също да бъдат назовавани според формата им, като квадратчета с дата или барове.', 0.4840599482604815], ['Cookie Monster е Muppet в дългогодишното детско телевизионно шоу Sesame Street, който е най-известен с ненаситния си апетит към бисквитките и известните си фрази за ядене, като „Me want cookie!“, „Me eat cookie!“ (или просто „COOKIE!“) и „Om nom nom nom“ (казано през уста, пълна с храна).', 0.45209677893728206], ['Домашните бисквитки обикновено се правят от тесто, оформено на малки топчета и пуснато върху лист с бисквитки. След това се пекат във фурна за 5 до 15 минути, в зависимост от рецептата. Температурата на фурната варира от 250 до 350 градуса.', 0.4511516464302119], ['Бисквитките често се сервират с напитки като мляко, кафе или чай.', 0.42364528401677803], ['Американската употреба произлиза от холандското koekje „малка торта“, което е умалително от „koek“ („торта“), което произлиза от средно холандската дума „koke“.', 0.3267314582662877]] ```
Banshee/LukeSkywalker
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en datasets: - librispeech_asr tags: - speech - audio - automatic-speech-recognition - hf-asr-leaderboard license: apache-2.0 model-index: - name: wav2vec2-conformer-rel-pos-large-960h-ft results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 1.85 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 3.83 --- # Wav2Vec2-Conformer-Large-960h with Relative Position Embeddings Wav2Vec2-Conformer with relative position embeddings, pretrained and **fine-tuned on 960 hours of Librispeech** on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) **Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171). The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft") model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **facebook/wav2vec2-conformer-rel-pos-large-960h-ft** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import Wav2Vec2ConformerForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft").to("cuda") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft") def map_to_pred(batch): inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest") input_values = inputs.input_values.to("cuda") attention_mask = inputs.attention_mask.to("cuda") with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | |---|---| | 1.85 | 3.82 |
Banshee/dialoGPT-luke-small
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en license: apache-2.0 tags: - roberta - mutlimodal - exbert inference: false --- # Taiyi-Roberta-124M-D - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) - Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/) ## 简介 Brief Introduction COCO和VG上特殊预训练的,英文版的MAP(名称暂定)的文本端RoBERTa-base。 Special pre-training on COCO and VG, the textual encoder for MAP (temporary) in English, RoBERTa-base. ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 特殊 Special | 多模态 Multimodal | 太乙 Taiyi | 待定 TBD | 124M | 特殊预训练方法-英文 D-English | ## 模型信息 Model Information 基于Roberta-base,我们使用特殊的训练任务引入一些多模态信息。"D"表示这是一种新的预训练方法。对于特殊的多模态表征,在论文中我们设计了集中不同的训练目标。预训练数据集为MSCOCO和VG。我们的代码和预训练任务的细节将在论文接受后公开。 Based on pre-trained Roberta-base, we apply some multimodal information with special pre-training tasks. "D" implies a special training method. For special multimodal representations, we design several special training objectives in our paper. The pre-training datasets are MSCOCO and VG. Our code and details of pre-training tasks will be made publicly available upon paper acceptance. ### 下游效果 Performance **GLUE** | Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | WNLI | |------------------------|------|------|------|-------|------|-------|------|------|------| | Robert-base (official) | 87.6 | 91.9 | 92.8 | 94.8 | 63.6 | 91.2 | 90.2 | 78.7 | - | | Roberta-base (local) | 87.0 | 91.3 | 92.5 | 94.2 | 62.8 | 90.6 | 92.9 | 78.0 | 56.3 | | Taiyi-Roberta-124M-D (local) | 87.1 | 91.8 | 92.3 | 94.5 | 62.6 | 90.4 | 92.4 | 78.7 | 56.3 | The local test settings are: Sequence length: 128, Batch size: 32, Learning rate: 3e-5 An additional dataset WNLI is tested. ## 使用 Usage ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained("IDEA-CCNL/Taiyi-Roberta-124M-D") model = RobertaModel.from_pretrained("IDEA-CCNL/Taiyi-Roberta-124M-D") ``` ## 引用 Citation 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970): If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970): ```text @article{fengshenbang, author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen}, title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, journal = {CoRR}, volume = {abs/2209.02970}, year = {2022} } ``` 也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/): You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/): ```text @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
Banshee/dialoGPT-small-luke
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en datasets: - librispeech_asr tags: - speech - audio - automatic-speech-recognition - hf-asr-leaderboard license: apache-2.0 --- # Wav2Vec2-Conformer-Large-100h with Relative Position Embeddings [Facebook's Wav2Vec2 Conformer (TODO-add link)]() Wav2Vec2 Conformer with relative position embeddings, pretrained on 960 hours of Librispeech and fine-tuned on **100 hours of Librispeech** on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) **Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171). The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-100h-ft") model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-100h-ft") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ```
Barleysack/AERoberta2
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- language: en datasets: - librispeech_asr tags: - speech - audio - automatic-speech-recognition - hf-asr-leaderboard license: apache-2.0 model-index: - name: wav2vec2-conformer-rel-pos-large-960h-ft results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 1.96 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 3.98 --- # Wav2Vec2-Conformer-Large-960h with Rotary Position Embeddings Wav2Vec2 Conformer with rotary position embeddings, pretrained and **fine-tuned on 960 hours of Librispeech** on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) **Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171). The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft") model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **facebook/wav2vec2-conformer-rope-large-960h-ft** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import Wav2Vec2ConformerForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft").to("cuda") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft") def map_to_pred(batch): inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest") input_values = inputs.input_values.to("cuda") attention_mask = inputs.attention_mask.to("cuda") with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | |---|---| | 1.96 | 3.98 |
Battlehooks/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xls-r-300m-bemba-20hrs
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xls-r-300m-bemba-20hrs

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2815
- Wer: 0.3435

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3301 | 0.54 | 400 | 0.5177 | 0.7570 |
| 0.6437 | 1.08 | 800 | 0.3580 | 0.5658 |
| 0.5149 | 1.61 | 1200 | 0.2953 | 0.5004 |
| 0.4547 | 2.15 | 1600 | 0.2701 | 0.4464 |
| 0.4084 | 2.69 | 2000 | 0.2743 | 0.4383 |
| 0.3606 | 3.23 | 2400 | 0.2482 | 0.3952 |
| 0.3227 | 3.76 | 2800 | 0.2461 | 0.3965 |
| 0.3025 | 4.3 | 3200 | 0.2484 | 0.4015 |
| 0.2697 | 4.84 | 3600 | 0.2357 | 0.3838 |
| 0.2443 | 5.38 | 4000 | 0.2385 | 0.3822 |
| 0.2287 | 5.91 | 4400 | 0.2353 | 0.3747 |
| 0.1977 | 6.45 | 4800 | 0.2337 | 0.3624 |
| 0.1895 | 6.99 | 5200 | 0.2319 | 0.3568 |
| 0.1561 | 7.53 | 5600 | 0.2540 | 0.3561 |
| 0.1448 | 8.06 | 6000 | 0.2772 | 0.3612 |
| 0.1221 | 8.6 | 6400 | 0.2755 | 0.3596 |
| 0.1133 | 9.14 | 6800 | 0.2733 | 0.3495 |
| 0.0969 | 9.68 | 7200 | 0.2815 | 0.3435 |

### Framework versions

- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
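The card above ends at the training log and does not show inference. Below is a hypothetical sketch of how a fine-tuned XLS-R CTC checkpoint like this one is typically used: the repository id is a placeholder (the card does not state the Hub path), and it assumes a `Wav2Vec2Processor` was saved alongside the model weights.

```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder repo id -- substitute the actual Hub path of this checkpoint.
model_id = "<namespace>/xls-r-300m-bemba-20hrs"

processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes tokenizer + feature extractor are bundled
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# XLS-R expects 16 kHz mono audio; the file name here is a placeholder.
speech, _ = librosa.load("bemba_sample.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax over the vocabulary, then collapse repeats/blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```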
BigSalmon/MrLincoln3
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17
2022-04-18T18:30:12Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xlsr-nepalii
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xlsr-nepalii

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.12.1
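The hyperparameter list above maps almost one-to-one onto `transformers.TrainingArguments`. The sketch below is illustrative only: the output directory and any option not listed in the card are assumptions rather than values taken from the original run.

```python
from transformers import TrainingArguments

# Mirror of the hyperparameters listed in the card; unlisted options keep Trainer defaults.
training_args = TrainingArguments(
    output_dir="wav2vec2-xlsr-nepalii",  # assumed; the card does not state the output path
    learning_rate=3e-4,                  # 0.0003
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,       # gives the reported total train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=250,
    num_train_epochs=5,
    seed=42,
)
```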
BigSalmon/MrLincoln6
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- zainalq7/autotrain-data-NLU_crypto_sentiment_analysis
co2_eq_emissions: 0.005300030853867218
---

# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 754123133
- CO2 Emissions (in grams): 0.005300030853867218

## Validation Metrics

- Loss: 0.387116938829422
- Accuracy: 0.8658536585365854
- Macro F1: 0.7724053724053724
- Micro F1: 0.8658536585365854
- Weighted F1: 0.8467166979362101
- Macro Precision: 0.8232219717155155
- Micro Precision: 0.8658536585365854
- Weighted Precision: 0.8516026874759421
- Macro Recall: 0.7642089093701996
- Micro Recall: 0.8658536585365854
- Weighted Recall: 0.8658536585365854

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zainalq7/autotrain-NLU_crypto_sentiment_analysis-754123133
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("zainalq7/autotrain-NLU_crypto_sentiment_analysis-754123133", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("zainalq7/autotrain-NLU_crypto_sentiment_analysis-754123133", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")

outputs = model(**inputs)
```
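The Python snippet in the card stops at the raw model outputs. The continuation below is a hedged sketch of turning those logits into a class label; the label names themselves depend on how the AutoTrain project was configured and are read from the checkpoint's `id2label` mapping rather than stated in the card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "zainalq7/autotrain-NLU_crypto_sentiment_analysis-754123133"
model = AutoModelForSequenceClassification.from_pretrained(model_id, use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the class dimension, then pick the most likely label.
probs = logits.softmax(dim=-1)[0]
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], round(float(probs[pred_id]), 4))
```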