| Column | Type | Range / Values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-28 18:27:46 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 534 values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-28 18:26:08 |
| card | string | length 11 to 1.01M |
rtoguchi/t5-small-finetuned-en-to-ro-weight_decay_0.001
rtoguchi
2021-12-02T17:46:55Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: t5-small-finetuned-en-to-ro-weight_decay_0.001 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16 type: wmt16 args: ro-en metrics: - name: Bleu type: bleu value: 7.3524 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-en-to-ro-weight_decay_0.001 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.4509 - Bleu: 7.3524 - Gen Len: 18.2581 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 0.6488 | 1.0 | 7629 | 1.4509 | 7.3524 | 18.2581 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
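The card above gives training details but no inference code. A minimal usage sketch (assuming the checkpoint keeps t5-small's built-in `translation_en_to_ro` task configuration, which the card does not state):

```python
from transformers import pipeline

# Minimal sketch, not from the card: t5-small ships a translation_en_to_ro
# task config, so the fine-tuned checkpoint can usually be driven the same way.
translator = pipeline(
    "translation_en_to_ro",
    model="rtoguchi/t5-small-finetuned-en-to-ro-weight_decay_0.001",
)
print(translator("The weather is nice today.", max_length=40)[0]["translation_text"])
```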
tyoyo/t5-base-TEDxJP-11body-0context
tyoyo
2021-12-02T17:37:36Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:te_dx_jp", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer datasets: - te_dx_jp model-index: - name: t5-base-TEDxJP-11body-0context results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-TEDxJP-11body-0context This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset. It achieves the following results on the evaluation set: - Loss: 0.8068 - Wer: 0.1976 - Mer: 0.1904 - Wil: 0.2816 - Wip: 0.7184 - Hits: 602335 - Substitutions: 75050 - Deletions: 39435 - Insertions: 27185 - Cer: 0.1625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:------:|:-------------:|:---------:|:----------:|:------:| | 0.8909 | 1.0 | 746 | 0.7722 | 0.3120 | 0.2861 | 0.3989 | 0.6011 | 558138 | 99887 | 58795 | 64983 | 0.2652 | | 0.6786 | 2.0 | 1492 | 0.7021 | 0.2226 | 0.2122 | 0.3069 | 0.6931 | 592242 | 78773 | 45805 | 34978 | 0.1862 | | 0.5627 | 3.0 | 2238 | 0.6996 | 0.2104 | 0.2016 | 0.2942 | 0.7058 | 597381 | 76593 | 42846 | 31392 | 0.1752 | | 0.489 | 4.0 | 2984 | 0.7161 | 0.2030 | 0.1952 | 0.2865 | 0.7135 | 599808 | 75155 | 41857 | 28506 | 0.1684 | | 0.4355 | 5.0 | 3730 | 0.7389 | 0.2000 | 0.1924 | 0.2837 | 0.7163 | 601815 | 75247 | 39758 | 28335 | 0.1651 | | 0.3836 | 6.0 | 4476 | 0.7537 | 0.1992 | 0.1918 | 0.2829 | 0.7171 | 601846 | 75046 | 39928 | 27815 | 0.1640 | | 0.3617 | 7.0 | 5222 | 0.7743 | 0.1995 | 0.1918 | 0.2832 | 0.7168 | 602287 | 75268 | 39265 | 28445 | 0.1642 | | 0.3258 | 8.0 | 5968 | 0.7907 | 0.1971 | 0.1899 | 0.2809 | 0.7191 | 602800 | 74887 | 39133 | 27258 | 0.1620 | | 0.3225 | 9.0 | 6714 | 0.8035 | 0.1981 | 0.1908 | 0.2823 | 0.7177 | 602418 | 75372 | 39030 | 27625 | 0.1630 | | 0.3162 | 10.0 | 7460 | 0.8068 | 0.1976 | 0.1904 | 0.2816 | 0.7184 | 602335 | 75050 | 39435 | 27185 | 0.1625 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
fse/word2vec-google-news-300
fse
2021-12-02T16:46:03Z
0
38
null
[ "glove", "gensim", "fse", "arxiv:1301.3781", "arxiv:1310.4546", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - glove - gensim - fse --- # Word2Vec Pre-trained vectors trained on a part of the Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. The phrases were obtained using a simple data-driven approach described in 'Distributed Representations of Words and Phrases and their Compositionality' Read more: * https://code.google.com/archive/p/word2vec/ * https://arxiv.org/abs/1301.3781 * https://arxiv.org/abs/1310.4546 * https://www.microsoft.com/en-us/research/publication/linguistic-regularities-in-continuous-space-word-representations/?from=http%3A%2F%2Fresearch.microsoft.com%2Fpubs%2F189726%2Frvecs.pdf
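The card describes the vectors but omits loading code. A minimal gensim sketch (the filename inside the repository is an assumption, not taken from the card):

```python
from huggingface_hub import snapshot_download
from gensim.models import KeyedVectors

# Minimal sketch, not from the card. The filename below is assumed;
# check the repository file listing for the actual name(s).
local_dir = snapshot_download(repo_id="fse/word2vec-google-news-300")
kv = KeyedVectors.load(f"{local_dir}/word2vec-google-news-300.model")

# Classic analogy query: king - man + woman
print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```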
fse/glove-wiki-gigaword-50
fse
2021-12-02T16:45:04Z
0
1
null
[ "glove", "gensim", "fse", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - glove - gensim - fse --- # GloVe Wiki-Gigaword Pre-trained GloVe vectors based on Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, uncased). Read more: * https://nlp.stanford.edu/projects/glove/ * https://nlp.stanford.edu/pubs/glove.pdf
fse/glove-wiki-gigaword-200
fse
2021-12-02T16:43:27Z
0
0
null
[ "glove", "gensim", "fse", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - glove - gensim - fse --- # GloVe Wiki-Gigaword Pre-trained GloVe vectors based on Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, uncased). Read more: * https://nlp.stanford.edu/projects/glove/ * https://nlp.stanford.edu/pubs/glove.pdf
fse/glove-wiki-gigaword-100
fse
2021-12-02T16:42:45Z
0
1
null
[ "glove", "gensim", "fse", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - glove - gensim - fse --- # GloVe Wiki-Gigaword Pre-trained GloVe vectors based on Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, uncased). Read more: * https://nlp.stanford.edu/projects/glove/ * https://nlp.stanford.edu/pubs/glove.pdf
huggingtweets/derspiegel
huggingtweets
2021-12-02T16:13:08Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/derspiegel/1638461583796/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1214723509521387520/7UENeEVp_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">DER SPIEGEL</div> <div style="text-align: center; font-size: 14px;">@derspiegel</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from DER SPIEGEL. | Data | DER SPIEGEL | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 478 | | Short tweets | 6 | | Tweets kept | 2766 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2uv8zr0k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @derspiegel's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/i3q4xu9o) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/i3q4xu9o/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/derspiegel') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
emrecan/bert-base-turkish-cased-allnli_tr
emrecan
2021-12-02T14:58:36Z
19
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "zero-shot-classification", "nli", "tr", "dataset:nli_tr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-02T23:29:05Z
--- language: - tr tags: - zero-shot-classification - nli - pytorch pipeline_tag: zero-shot-classification license: mit datasets: - nli_tr metrics: - accuracy widget: - text: "Dolar yükselmeye devam ediyor." candidate_labels: "ekonomi, siyaset, spor" - text: "Senaryo çok saçmaydı, beğendim diyemem." candidate_labels: "olumlu, olumsuz" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-turkish-cased_allnli_tr This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5771 - Accuracy: 0.7978 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.8559 | 0.03 | 1000 | 0.7577 | 0.6798 | | 0.6612 | 0.07 | 2000 | 0.7263 | 0.6958 | | 0.6115 | 0.1 | 3000 | 0.6431 | 0.7364 | | 0.5916 | 0.14 | 4000 | 0.6347 | 0.7407 | | 0.5719 | 0.17 | 5000 | 0.6317 | 0.7483 | | 0.5575 | 0.2 | 6000 | 0.6034 | 0.7544 | | 0.5521 | 0.24 | 7000 | 0.6148 | 0.7568 | | 0.5393 | 0.27 | 8000 | 0.5931 | 0.7610 | | 0.5382 | 0.31 | 9000 | 0.5866 | 0.7665 | | 0.5306 | 0.34 | 10000 | 0.5881 | 0.7594 | | 0.5295 | 0.37 | 11000 | 0.6120 | 0.7632 | | 0.5225 | 0.41 | 12000 | 0.5620 | 0.7759 | | 0.5112 | 0.44 | 13000 | 0.5641 | 0.7769 | | 0.5133 | 0.48 | 14000 | 0.5571 | 0.7798 | | 0.5023 | 0.51 | 15000 | 0.5719 | 0.7722 | | 0.5017 | 0.54 | 16000 | 0.5482 | 0.7844 | | 0.5111 | 0.58 | 17000 | 0.5503 | 0.7800 | | 0.4929 | 0.61 | 18000 | 0.5502 | 0.7836 | | 0.4923 | 0.65 | 19000 | 0.5424 | 0.7843 | | 0.4894 | 0.68 | 20000 | 0.5417 | 0.7851 | | 0.4877 | 0.71 | 21000 | 0.5514 | 0.7841 | | 0.4818 | 0.75 | 22000 | 0.5494 | 0.7848 | | 0.4898 | 0.78 | 23000 | 0.5450 | 0.7859 | | 0.4823 | 0.82 | 24000 | 0.5417 | 0.7878 | | 0.4806 | 0.85 | 25000 | 0.5354 | 0.7875 | | 0.4779 | 0.88 | 26000 | 0.5338 | 0.7848 | | 0.4744 | 0.92 | 27000 | 0.5277 | 0.7934 | | 0.4678 | 0.95 | 28000 | 0.5507 | 0.7871 | | 0.4727 | 0.99 | 29000 | 0.5603 | 0.7789 | | 0.4243 | 1.02 | 30000 | 0.5626 | 0.7894 | | 0.3955 | 1.05 | 31000 | 0.5324 | 0.7939 | | 0.4022 | 1.09 | 32000 | 0.5322 | 0.7925 | | 0.3976 | 1.12 | 33000 | 0.5450 | 0.7920 | | 0.3913 | 1.15 | 34000 | 0.5464 | 0.7948 | | 0.406 | 1.19 | 35000 | 0.5406 | 0.7958 | | 0.3875 | 1.22 | 36000 | 0.5489 | 0.7878 | | 0.4024 | 1.26 | 37000 | 0.5427 | 0.7925 | | 0.3988 | 1.29 | 38000 | 0.5335 | 0.7904 | | 0.393 | 1.32 | 39000 | 0.5415 | 0.7923 | | 0.3988 | 1.36 | 40000 | 0.5385 | 0.7962 | | 0.3912 | 1.39 | 41000 | 0.5383 | 0.7950 | | 0.3949 | 1.43 | 42000 | 0.5415 | 0.7931 | | 0.3902 | 1.46 | 43000 | 0.5438 | 0.7893 | | 0.3948 | 1.49 | 44000 | 0.5348 | 0.7906 | | 0.3921 | 1.53 | 45000 | 0.5361 | 0.7890 | | 0.3944 | 1.56 | 46000 | 0.5419 | 0.7953 | | 0.3959 | 1.6 | 47000 | 0.5402 | 0.7967 | | 0.3926 | 1.63 | 48000 | 0.5429 | 0.7925 | | 0.3854 | 1.66 | 49000 | 0.5346 | 0.7959 | | 0.3864 | 1.7 | 
50000 | 0.5241 | 0.7979 | | 0.385 | 1.73 | 51000 | 0.5149 | 0.8002 | | 0.3871 | 1.77 | 52000 | 0.5325 | 0.8002 | | 0.3819 | 1.8 | 53000 | 0.5332 | 0.8022 | | 0.384 | 1.83 | 54000 | 0.5419 | 0.7873 | | 0.3899 | 1.87 | 55000 | 0.5225 | 0.7974 | | 0.3894 | 1.9 | 56000 | 0.5358 | 0.7977 | | 0.3838 | 1.94 | 57000 | 0.5264 | 0.7988 | | 0.3881 | 1.97 | 58000 | 0.5280 | 0.7956 | | 0.3756 | 2.0 | 59000 | 0.5601 | 0.7969 | | 0.3156 | 2.04 | 60000 | 0.5936 | 0.7925 | | 0.3125 | 2.07 | 61000 | 0.5898 | 0.7938 | | 0.3179 | 2.11 | 62000 | 0.5591 | 0.7981 | | 0.315 | 2.14 | 63000 | 0.5853 | 0.7970 | | 0.3122 | 2.17 | 64000 | 0.5802 | 0.7979 | | 0.3105 | 2.21 | 65000 | 0.5758 | 0.7979 | | 0.3076 | 2.24 | 66000 | 0.5685 | 0.7980 | | 0.3117 | 2.28 | 67000 | 0.5799 | 0.7944 | | 0.3108 | 2.31 | 68000 | 0.5742 | 0.7988 | | 0.3047 | 2.34 | 69000 | 0.5907 | 0.7921 | | 0.3114 | 2.38 | 70000 | 0.5723 | 0.7937 | | 0.3035 | 2.41 | 71000 | 0.5944 | 0.7955 | | 0.3129 | 2.45 | 72000 | 0.5838 | 0.7928 | | 0.3071 | 2.48 | 73000 | 0.5929 | 0.7949 | | 0.3061 | 2.51 | 74000 | 0.5794 | 0.7967 | | 0.3068 | 2.55 | 75000 | 0.5892 | 0.7954 | | 0.3053 | 2.58 | 76000 | 0.5796 | 0.7962 | | 0.3117 | 2.62 | 77000 | 0.5763 | 0.7981 | | 0.3062 | 2.65 | 78000 | 0.5852 | 0.7964 | | 0.3004 | 2.68 | 79000 | 0.5793 | 0.7966 | | 0.3146 | 2.72 | 80000 | 0.5693 | 0.7985 | | 0.3146 | 2.75 | 81000 | 0.5788 | 0.7982 | | 0.3079 | 2.79 | 82000 | 0.5726 | 0.7978 | | 0.3058 | 2.82 | 83000 | 0.5677 | 0.7988 | | 0.3055 | 2.85 | 84000 | 0.5701 | 0.7982 | | 0.3049 | 2.89 | 85000 | 0.5809 | 0.7970 | | 0.3044 | 2.92 | 86000 | 0.5741 | 0.7986 | | 0.3057 | 2.96 | 87000 | 0.5743 | 0.7980 | | 0.3081 | 2.99 | 88000 | 0.5771 | 0.7978 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
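The card is tagged for zero-shot classification and ships widget examples but no code; a minimal sketch built from those widget inputs:

```python
from transformers import pipeline

# Minimal sketch reusing the widget example from the card above.
zsc = pipeline(
    "zero-shot-classification",
    model="emrecan/bert-base-turkish-cased-allnli_tr",
)
result = zsc("Dolar yükselmeye devam ediyor.",
             candidate_labels=["ekonomi", "siyaset", "spor"])
print(result["labels"][0], result["scores"][0])
```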
emrecan/convbert-base-turkish-mc4-cased-allnli_tr
emrecan
2021-12-02T14:57:01Z
97
2
transformers
[ "transformers", "pytorch", "convbert", "text-classification", "zero-shot-classification", "nli", "tr", "dataset:nli_tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-02T23:29:05Z
--- language: - tr tags: - zero-shot-classification - nli - pytorch pipeline_tag: zero-shot-classification license: apache-2.0 datasets: - nli_tr metrics: - accuracy widget: - text: "Dolar yükselmeye devam ediyor." candidate_labels: "ekonomi, siyaset, spor" - text: "Senaryo çok saçmaydı, beğendim diyemem." candidate_labels: "olumlu, olumsuz" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convbert-base-turkish-mc4-cased_allnli_tr This model is a fine-tuned version of [dbmdz/convbert-base-turkish-mc4-cased](https://huggingface.co/dbmdz/convbert-base-turkish-mc4-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5541 - Accuracy: 0.8111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7338 | 0.03 | 1000 | 0.6722 | 0.7236 | | 0.603 | 0.07 | 2000 | 0.6465 | 0.7399 | | 0.5605 | 0.1 | 3000 | 0.5801 | 0.7728 | | 0.55 | 0.14 | 4000 | 0.5994 | 0.7626 | | 0.529 | 0.17 | 5000 | 0.5720 | 0.7697 | | 0.5196 | 0.2 | 6000 | 0.5692 | 0.7769 | | 0.5117 | 0.24 | 7000 | 0.5725 | 0.7785 | | 0.5044 | 0.27 | 8000 | 0.5532 | 0.7787 | | 0.5016 | 0.31 | 9000 | 0.5546 | 0.7812 | | 0.5031 | 0.34 | 10000 | 0.5461 | 0.7870 | | 0.4949 | 0.37 | 11000 | 0.5725 | 0.7826 | | 0.4894 | 0.41 | 12000 | 0.5419 | 0.7933 | | 0.4796 | 0.44 | 13000 | 0.5278 | 0.7914 | | 0.4795 | 0.48 | 14000 | 0.5193 | 0.7953 | | 0.4713 | 0.51 | 15000 | 0.5534 | 0.7771 | | 0.4738 | 0.54 | 16000 | 0.5098 | 0.8039 | | 0.481 | 0.58 | 17000 | 0.5244 | 0.7958 | | 0.4634 | 0.61 | 18000 | 0.5215 | 0.7972 | | 0.465 | 0.65 | 19000 | 0.5129 | 0.7985 | | 0.4624 | 0.68 | 20000 | 0.5062 | 0.8047 | | 0.4597 | 0.71 | 21000 | 0.5114 | 0.8029 | | 0.4571 | 0.75 | 22000 | 0.5070 | 0.8073 | | 0.4602 | 0.78 | 23000 | 0.5115 | 0.7993 | | 0.4552 | 0.82 | 24000 | 0.5085 | 0.8052 | | 0.4538 | 0.85 | 25000 | 0.5118 | 0.7974 | | 0.4517 | 0.88 | 26000 | 0.5036 | 0.8044 | | 0.4517 | 0.92 | 27000 | 0.4930 | 0.8062 | | 0.4413 | 0.95 | 28000 | 0.5307 | 0.7964 | | 0.4483 | 0.99 | 29000 | 0.5195 | 0.7938 | | 0.4036 | 1.02 | 30000 | 0.5238 | 0.8029 | | 0.3724 | 1.05 | 31000 | 0.5125 | 0.8082 | | 0.3777 | 1.09 | 32000 | 0.5099 | 0.8075 | | 0.3753 | 1.12 | 33000 | 0.5172 | 0.8053 | | 0.367 | 1.15 | 34000 | 0.5188 | 0.8053 | | 0.3819 | 1.19 | 35000 | 0.5218 | 0.8046 | | 0.363 | 1.22 | 36000 | 0.5202 | 0.7993 | | 0.3794 | 1.26 | 37000 | 0.5240 | 0.8048 | | 0.3749 | 1.29 | 38000 | 0.5026 | 0.8054 | | 0.367 | 1.32 | 39000 | 0.5198 | 0.8075 | | 0.3759 | 1.36 | 40000 | 0.5298 | 0.7993 | | 0.3701 | 1.39 | 41000 | 0.5072 | 0.8091 | | 0.3742 | 1.43 | 42000 | 0.5071 | 0.8098 | | 0.3706 | 1.46 | 43000 | 0.5317 | 0.8037 | | 0.3716 | 1.49 | 44000 | 0.5034 | 0.8052 | | 0.3717 | 1.53 | 45000 | 0.5258 | 0.8012 | | 0.3714 | 1.56 | 46000 | 0.5195 | 0.8050 | | 0.3781 | 1.6 | 47000 | 0.5004 | 0.8104 | | 0.3725 | 1.63 | 48000 | 0.5124 | 0.8113 | | 0.3624 | 1.66 | 49000 | 0.5040 | 
0.8094 | | 0.3657 | 1.7 | 50000 | 0.4979 | 0.8111 | | 0.3669 | 1.73 | 51000 | 0.4968 | 0.8100 | | 0.3636 | 1.77 | 52000 | 0.5075 | 0.8079 | | 0.36 | 1.8 | 53000 | 0.4985 | 0.8110 | | 0.3624 | 1.83 | 54000 | 0.5125 | 0.8070 | | 0.366 | 1.87 | 55000 | 0.4918 | 0.8117 | | 0.3655 | 1.9 | 56000 | 0.5051 | 0.8109 | | 0.3609 | 1.94 | 57000 | 0.5083 | 0.8105 | | 0.3672 | 1.97 | 58000 | 0.5129 | 0.8085 | | 0.3545 | 2.0 | 59000 | 0.5467 | 0.8109 | | 0.2938 | 2.04 | 60000 | 0.5635 | 0.8049 | | 0.29 | 2.07 | 61000 | 0.5781 | 0.8041 | | 0.2992 | 2.11 | 62000 | 0.5470 | 0.8077 | | 0.2957 | 2.14 | 63000 | 0.5765 | 0.8073 | | 0.292 | 2.17 | 64000 | 0.5472 | 0.8106 | | 0.2893 | 2.21 | 65000 | 0.5590 | 0.8085 | | 0.2883 | 2.24 | 66000 | 0.5535 | 0.8064 | | 0.2923 | 2.28 | 67000 | 0.5508 | 0.8095 | | 0.2868 | 2.31 | 68000 | 0.5679 | 0.8098 | | 0.2892 | 2.34 | 69000 | 0.5660 | 0.8057 | | 0.292 | 2.38 | 70000 | 0.5494 | 0.8088 | | 0.286 | 2.41 | 71000 | 0.5653 | 0.8085 | | 0.2939 | 2.45 | 72000 | 0.5673 | 0.8070 | | 0.286 | 2.48 | 73000 | 0.5600 | 0.8092 | | 0.2844 | 2.51 | 74000 | 0.5508 | 0.8095 | | 0.2913 | 2.55 | 75000 | 0.5645 | 0.8088 | | 0.2859 | 2.58 | 76000 | 0.5677 | 0.8095 | | 0.2892 | 2.62 | 77000 | 0.5598 | 0.8113 | | 0.2898 | 2.65 | 78000 | 0.5618 | 0.8096 | | 0.2814 | 2.68 | 79000 | 0.5664 | 0.8103 | | 0.2917 | 2.72 | 80000 | 0.5484 | 0.8122 | | 0.2907 | 2.75 | 81000 | 0.5522 | 0.8116 | | 0.2896 | 2.79 | 82000 | 0.5540 | 0.8093 | | 0.2907 | 2.82 | 83000 | 0.5469 | 0.8104 | | 0.2882 | 2.85 | 84000 | 0.5471 | 0.8122 | | 0.2878 | 2.89 | 85000 | 0.5532 | 0.8108 | | 0.2858 | 2.92 | 86000 | 0.5511 | 0.8115 | | 0.288 | 2.96 | 87000 | 0.5491 | 0.8111 | | 0.2834 | 2.99 | 88000 | 0.5541 | 0.8111 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
eliotm/t5-small-finetuned-en-to-ro-LR_1e-3
eliotm
2021-12-02T14:05:14Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: t5-small-finetuned-en-to-ro-LR_1e-3 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16 type: wmt16 args: ro-en metrics: - name: Bleu type: bleu value: 7.1606 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-en-to-ro-LR_1e-3 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.5215 - Bleu: 7.1606 - Gen Len: 18.2451 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 0.6758 | 1.0 | 7629 | 1.5215 | 7.1606 | 18.2451 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
fse/glove-twitter-25
fse
2021-12-02T13:39:31Z
0
0
null
[ "glove", "gensim", "fse", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - glove - gensim - fse --- # Glove Twitter Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased. Read more: * https://nlp.stanford.edu/projects/glove/ * https://nlp.stanford.edu/pubs/glove.pdf
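As with the other fse repositories above, the card omits loading code. A minimal gensim sketch (the in-repo filename is an assumption):

```python
from huggingface_hub import snapshot_download
from gensim.models import KeyedVectors

# Minimal sketch, not from the card; the filename is assumed.
local_dir = snapshot_download(repo_id="fse/glove-twitter-25")
kv = KeyedVectors.load(f"{local_dir}/glove-twitter-25.model")

# Nearest neighbours in the 25-dimensional Twitter space.
print(kv.most_similar("coffee", topn=5))
```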
ai-forever/rudalle-Emojich
ai-forever
2021-12-02T11:06:48Z
0
16
null
[ "pytorch", "region:us" ]
null
2022-03-02T23:29:05Z
# Emojich ![](./pics/emojich_rgba_100.png) ### generate emojis from text The model was trained by [Sber AI](https://github.com/sberbank-ai) * Task: `text2image generation` * Num Parameters: `1.3 B` * Training Data Volume: `120 million text-image pairs` & [`2749 text-emoji pairs`](https://www.kaggle.com/shonenkov/russian-emoji) [![Telegram](https://img.shields.io/badge/Telegram-Stickers-blue?style=for-the-badge&logo=data:image/svg%2bxml;base64,PHN2ZyBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCAyNCAyNCIgaGVpZ2h0PSI1MTIiIHZpZXdCb3g9IjAgMCAyNCAyNCIgd2lkdGg9IjUxMiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj48cGF0aCBkPSJtOS40MTcgMTUuMTgxLS4zOTcgNS41ODRjLjU2OCAwIC44MTQtLjI0NCAxLjEwOS0uNTM3bDIuNjYzLTIuNTQ1IDUuNTE4IDQuMDQxYzEuMDEyLjU2NCAxLjcyNS4yNjcgMS45OTgtLjkzMWwzLjYyMi0xNi45NzIuMDAxLS4wMDFjLjMyMS0xLjQ5Ni0uNTQxLTIuMDgxLTEuNTI3LTEuNzE0bC0yMS4yOSA4LjE1MWMtMS40NTMuNTY0LTEuNDMxIDEuMzc0LS4yNDcgMS43NDFsNS40NDMgMS42OTMgMTIuNjQzLTcuOTExYy41OTUtLjM5NCAxLjEzNi0uMTc2LjY5MS4yMTh6IiBmaWxsPSIjMDM5YmU1Ii8+PC9zdmc+)](https://telegram.me/addstickers/SberAI_ruDALLE) ### Model Description 😋 Emojich is a 1.3-billion-parameter model from the GPT-3-like family; it generates emoji-style images with the brain of ◾ Malevich. ### Fine-tuning stage: The main goal of fine-tuning is to keep the generalization ability of the [ruDALL-E Malevich (XL)](https://huggingface.co/sberbank-ai/rudalle-Malevich) model on text-to-emoji tasks. ruDALL-E Malevich is a large multi-modal pretrained transformer that works with both images and texts. Freezing the feedforward and self-attention layers of a pretrained transformer has been shown to work well when switching modalities, but the model can still over-fit the text modality and lose generalization. To deal with this, a coefficient of 10^3 is applied to the image-codebook part of the weighted cross-entropy loss. The full training code is available on Kaggle: [![Kaggle](https://kaggle.com/static/images/open-in-kaggle.svg)](https://www.kaggle.com/shonenkov/emojich-rudall-e) ### Examples of generated emojis All examples are generated automatically (without manual cherry-picking) with the following hyper-parameters: seed 42, batch size 16, top-k 2048, top-p 0.995, temperature 1.0, GPU A100. To get better emojis, use more attempts (~512) and select the best one manually. *Remember, the great art makers became "great" after creating just one masterpiece.* ![](./pics/examples.png)
chandank/bart-base-finetuned-kaggglenews-batch8
chandank
2021-12-02T09:16:30Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bart-base-finetuned-kaggglenews-batch8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-kaggglenews-batch8 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:| | No log | 1.0 | 495 | 1.6409 | 27.9647 | 15.4352 | 23.611 | 25.107 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
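The ROUGE columns in the results table suggest a news-summarization fine-tune; a minimal usage sketch under that assumption (the card itself does not name the task):

```python
from transformers import pipeline

# Minimal sketch, assuming the checkpoint is meant for summarization
# (inferred from the ROUGE metrics, not stated in the card).
summarizer = pipeline(
    "summarization",
    model="chandank/bart-base-finetuned-kaggglenews-batch8",
)
article = "Replace this placeholder with the news article you want to summarize."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```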
Jeska/VaccinChatSentenceClassifierDutch_fromBERTjeDIAL
Jeska
2021-12-02T08:29:44Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: VaccinChatSentenceClassifierDutch_fromBERTjeDIAL results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # VaccinChatSentenceClassifierDutch_fromBERTjeDIAL This model is a fine-tuned version of [Jeska/BertjeWDialDataQA20k](https://huggingface.co/Jeska/BertjeWDialDataQA20k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8355 - Accuracy: 0.6322 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.4418 | 1.0 | 1457 | 2.3866 | 0.5406 | | 1.7742 | 2.0 | 2914 | 1.9365 | 0.6069 | | 1.1313 | 3.0 | 4371 | 1.8355 | 0.6322 | ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0 - Datasets 1.16.1 - Tokenizers 0.10.3
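A minimal sketch of calling this classifier through the text-classification pipeline (the Dutch example question is a placeholder; the label set comes from the VaccinChat fine-tuning data and is not documented in the card):

```python
from transformers import pipeline

# Minimal sketch, not from the card; the input question is a placeholder.
clf = pipeline(
    "text-classification",
    model="Jeska/VaccinChatSentenceClassifierDutch_fromBERTjeDIAL",
)
print(clf("Is het vaccin veilig voor zwangere vrouwen?"))
```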
BSen/wav2vec2-base-timit-demo-colab
BSen
2021-12-02T07:51:26Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4877 - Wer: 0.4895 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.6615 | 4.0 | 500 | 1.7423 | 1.0723 | | 0.8519 | 8.0 | 1000 | 0.4877 | 0.4895 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
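A minimal transcription sketch for this checkpoint (the audio path is a placeholder; 16 kHz mono English speech is assumed, matching TIMIT):

```python
from transformers import pipeline

# Minimal sketch, not from the card. "speech.wav" is a placeholder path;
# the model expects 16 kHz mono English audio.
asr = pipeline(
    "automatic-speech-recognition",
    model="BSen/wav2vec2-base-timit-demo-colab",
)
print(asr("speech.wav")["text"])
```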
chopey/testmntdv
chopey
2021-12-02T02:48:18Z
3
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
Test English-Dhivehi / Dhivehi-English NMT. Would need a lot more data to get accurate translations.
huggingtweets/afm_marketing
huggingtweets
2021-12-02T01:51:26Z
19
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1216156392/afm-marketing_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">AFM Marketing</div> <div style="text-align: center; font-size: 14px;">@afm_marketing</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from AFM Marketing. | Data | AFM Marketing | | --- | --- | | Tweets downloaded | 3238 | | Retweets | 1051 | | Short tweets | 64 | | Tweets kept | 2123 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6tgdc3wa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @afm_marketing's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36mudapr) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36mudapr/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/afm_marketing') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
BigSalmon/FormalRobertaaa
BigSalmon
2021-12-02T00:23:58Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
https://huggingface.co/spaces/BigSalmon/MASK2
BigSalmon/FormalBerta3
BigSalmon
2021-12-02T00:20:12Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
https://huggingface.co/spaces/BigSalmon/MASK2
BigSalmon/MrLincoln11
BigSalmon
2021-12-01T20:17:55Z
10
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
Informal to Formal: ``` from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("gpt2") model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln11") ``` ``` How To Make Prompt: Original: freedom of the press is a check against political corruption. Edited: fundamental to the spirit of democracy, freedom of the press is a check against political corruption. Edited 2: ever at odds with tyranny, freedom of the press is a check against political corruption. Edited 3: never to be neglected, freedom of the press is a check against political corruption. Original: solar is a beacon of achievement. Edited: central to decoupling from the perils of unsustainable energy, solar is a beacon of achievement. Edited 2: key to a future beyond fossil fuels, solar is a beacon of achievement. Original: milan is nevertheless ambivalent towards his costly terms. Edited: keen on contracting him, milan is nevertheless ambivalent towards his costly terms. Edited 2: intent on securing his services, milan is nevertheless ambivalent towards his costly terms. Original: ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. informal english: meteors are much harder to see, because they are only there for a fraction of a second. Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second. informal english: ````
emrecan/bert-base-multilingual-cased-multinli_tr
emrecan
2021-12-01T19:45:01Z
30
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "zero-shot-classification", "nli", "tr", "dataset:nli_tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-02T23:29:05Z
--- language: - tr tags: - zero-shot-classification - nli - pytorch pipeline_tag: zero-shot-classification license: apache-2.0 datasets: - nli_tr widget: - text: "Dolar yükselmeye devam ediyor." candidate_labels: "ekonomi, siyaset, spor" - text: "Senaryo çok saçmaydı, beğendim diyemem." candidate_labels: "olumlu, olumsuz" ---
emrecan/convbert-base-turkish-mc4-cased-multinli_tr
emrecan
2021-12-01T19:44:01Z
4
0
transformers
[ "transformers", "pytorch", "convbert", "text-classification", "zero-shot-classification", "nli", "tr", "dataset:nli_tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-02T23:29:05Z
--- language: - tr tags: - zero-shot-classification - nli - pytorch pipeline_tag: zero-shot-classification license: apache-2.0 datasets: - nli_tr widget: - text: "Dolar yükselmeye devam ediyor." candidate_labels: "ekonomi, siyaset, spor" - text: "Senaryo çok saçmaydı, beğendim diyemem." candidate_labels: "olumlu, olumsuz" ---
emrecan/convbert-base-turkish-mc4-cased-snli_tr
emrecan
2021-12-01T19:43:30Z
6
0
transformers
[ "transformers", "pytorch", "convbert", "text-classification", "zero-shot-classification", "nli", "tr", "dataset:nli_tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-02T23:29:05Z
--- language: - tr tags: - zero-shot-classification - nli - pytorch pipeline_tag: zero-shot-classification license: apache-2.0 datasets: - nli_tr widget: - text: "Dolar yükselmeye devam ediyor." candidate_labels: "ekonomi, siyaset, spor" - text: "Senaryo çok saçmaydı, beğendim diyemem." candidate_labels: "olumlu, olumsuz" ---
emrecan/distilbert-base-turkish-cased-snli_tr
emrecan
2021-12-01T19:42:34Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "zero-shot-classification", "nli", "tr", "dataset:nli_tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-02T23:29:05Z
--- language: - tr tags: - zero-shot-classification - nli - pytorch pipeline_tag: zero-shot-classification license: apache-2.0 datasets: - nli_tr widget: - text: "Dolar yükselmeye devam ediyor." candidate_labels: "ekonomi, siyaset, spor" - text: "Senaryo çok saçmaydı, beğendim diyemem." candidate_labels: "olumlu, olumsuz" ---
hankzhong/electra-small-discriminator-finetuned-squad
hankzhong
2021-12-01T19:04:28Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: electra-small-discriminator-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-small-discriminator-finetuned-squad This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2174 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5751 | 1.0 | 2767 | 1.3952 | | 1.2939 | 2.0 | 5534 | 1.2458 | | 1.1866 | 3.0 | 8301 | 1.2174 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
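A minimal question-answering sketch for this checkpoint (the question and context strings are placeholders):

```python
from transformers import pipeline

# Minimal sketch, not from the card; question and context are placeholders.
qa = pipeline(
    "question-answering",
    model="hankzhong/electra-small-discriminator-finetuned-squad",
)
out = qa(question="What dataset was the model fine-tuned on?",
         context="The electra-small discriminator was fine-tuned on the SQuAD dataset.")
print(out["answer"], round(out["score"], 3))
```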
Narrativaai/deberta-v3-small-finetuned-hate_speech18
Narrativaai
2021-12-01T17:41:13Z
9
3
transformers
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "dataset:hate_speech18", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer datasets: - hate_speech18 widget: - text: "ok, so do we need to kill them too or are the slavs okay ? for some reason whenever i hear the word slav , the word slobber comes to mind and i picture a slobbering half breed creature like the humpback of notre dame or Igor haha" metrics: - accuracy model-index: - name: deberta-v3-small-hate-speech results: - task: name: Text Classification type: text-classification dataset: name: hate_speech18 type: hate_speech18 args: default metrics: - name: Accuracy type: accuracy value: 0.916058394160584 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DeBERTa v3 small fine-tuned on hate_speech18 dataset for Hate Speech Detection This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the hate_speech18 dataset. It achieves the following results on the evaluation set: - Loss: 0.2922 - Accuracy: 0.9161 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4147 | 1.0 | 650 | 0.3910 | 0.8832 | | 0.2975 | 2.0 | 1300 | 0.2922 | 0.9161 | | 0.2575 | 3.0 | 1950 | 0.3555 | 0.9051 | | 0.1553 | 4.0 | 2600 | 0.4263 | 0.9124 | | 0.1267 | 5.0 | 3250 | 0.4238 | 0.9161 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
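A minimal sketch of running the classifier (the input string is a neutral placeholder; the returned label names depend on how the hate_speech18 labels were mapped during fine-tuning):

```python
from transformers import pipeline

# Minimal sketch, not from the card; the input text is a placeholder.
clf = pipeline(
    "text-classification",
    model="Narrativaai/deberta-v3-small-finetuned-hate_speech18",
)
print(clf("Replace this placeholder with the text you want to screen."))
```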
rossanez/t5-small-finetuned-de-en-256
rossanez
2021-12-01T11:08:44Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt14", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt14 model-index: - name: t5-small-finetuned-de-en-256 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-de-en-256 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 188 | 2.2663 | 4.5343 | 17.698 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
rossanez/t5-small-finetuned-de-en-64
rossanez
2021-12-01T11:02:01Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt14", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt14 model-index: - name: t5-small-finetuned-de-en-64 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-de-en-64 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 188 | 2.3808 | 3.1482 | 17.8019 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
rossanez/t5-base-finetuned-de-en
rossanez
2021-12-01T10:55:50Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt14", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt14 model-index: - name: t5-base-finetuned-de-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-finetuned-de-en This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 188 | 2.4324 | 1.2308 | 17.8904 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
emrecan/distilbert-base-turkish-cased-multinli_tr
emrecan
2021-12-01T10:50:34Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "zero-shot-classification", "nli", "tr", "dataset:nli_tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-02T23:29:05Z
--- language: - tr tags: - zero-shot-classification - nli - pytorch pipeline_tag: zero-shot-classification license: apache-2.0 datasets: - nli_tr widget: - text: "Dolar yükselmeye devam ediyor." candidate_labels: "ekonomi, siyaset, spor" - text: "Senaryo çok saçmaydı, beğendim diyemem." candidate_labels: "olumlu, olumsuz" ---
emrecan/bert-base-turkish-cased-multinli_tr
emrecan
2021-12-01T10:45:51Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "zero-shot-classification", "nli", "tr", "dataset:nli_tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-02T23:29:05Z
--- language: - tr tags: - zero-shot-classification - nli - pytorch pipeline_tag: zero-shot-classification license: apache-2.0 datasets: - nli_tr widget: - text: "Dolar yükselmeye devam ediyor." candidate_labels: "ekonomi, siyaset, spor" - text: "Senaryo çok saçmaydı, beğendim diyemem." candidate_labels: "olumlu, olumsuz" ---
glasses/vit_base_patch16_384
glasses
2021-12-01T08:26:46Z
1
0
transformers
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# vit_base_patch16_384 Implementation of Vision Transformer (ViT) proposed in [An Image Is Worth 16x16 Words: Transformers For Image Recognition At Scale](https://arxiv.org/pdf/2010.11929.pdf) The following image from the authors shows the architecture. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/ViT.png?raw=true) ``` python ViT.vit_small_patch16_224() ViT.vit_base_patch16_224() ViT.vit_base_patch16_384() ViT.vit_base_patch32_384() ViT.vit_huge_patch16_224() ViT.vit_huge_patch32_384() ViT.vit_large_patch16_224() ViT.vit_large_patch16_384() ViT.vit_large_patch32_384() ``` Examples: ``` python # change activation ViT.vit_base_patch16_224(activation = nn.SELU) # change number of classes (default is 1000) ViT.vit_base_patch16_224(n_classes=100) # pass a different block, default is TransformerEncoderBlock ViT.vit_base_patch16_224(block=MyCoolTransformerBlock) # get features model = ViT.vit_base_patch16_224() # first call .features, this will activate the forward hooks and tell the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) # [torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...] # to change the tokens, subclass ViTTokens class MyTokens(ViTTokens): def __init__(self, emb_size: int): super().__init__(emb_size) self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size)) ViT(tokens=MyTokens) ```
glasses/vit_base_patch16_224
glasses
2021-12-01T08:23:58Z
31
0
transformers
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# vit_base_patch16_224 Implementation of Vision Transformer (ViT) proposed in [An Image Is Worth 16x16 Words: Transformers For Image Recognition At Scale](https://arxiv.org/pdf/2010.11929.pdf) The following image from the authors shows the architecture. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/ViT.png?raw=true) ``` python ViT.vit_small_patch16_224() ViT.vit_base_patch16_224() ViT.vit_base_patch16_384() ViT.vit_base_patch32_384() ViT.vit_huge_patch16_224() ViT.vit_huge_patch32_384() ViT.vit_large_patch16_224() ViT.vit_large_patch16_384() ViT.vit_large_patch32_384() ``` Examples: ``` python # change activation ViT.vit_base_patch16_224(activation = nn.SELU) # change number of classes (default is 1000) ViT.vit_base_patch16_224(n_classes=100) # pass a different block, default is TransformerEncoderBlock ViT.vit_base_patch16_224(block=MyCoolTransformerBlock) # get features model = ViT.vit_base_patch16_224() # first call .features, this will activate the forward hooks and tell the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) # [torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...] # to change the tokens, subclass ViTTokens class MyTokens(ViTTokens): def __init__(self, emb_size: int): super().__init__(emb_size) self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size)) ViT(tokens=MyTokens) ```
glasses/efficientnet_b3
glasses
2021-12-01T08:08:37Z
2
0
transformers
[ "transformers", "pytorch", "arxiv:1905.11946", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# efficientnet_b3 Implementation of EfficientNet proposed in [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNet.png?raw=true) The basic architecture is similar to MobileNetV2, as it was found using [Progressive Neural Architecture Search](https://arxiv.org/abs/1905.11946). The following table shows the basic architecture (EfficientNet-efficientnet\_b0): ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNetModelsTable.jpeg?raw=true) Then, the architecture is scaled up from `efficientnet_b0` to `efficientnet_b7` using compound scaling. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNetScaling.jpg?raw=true) ``` python EfficientNet.efficientnet_b0() EfficientNet.efficientnet_b1() EfficientNet.efficientnet_b2() EfficientNet.efficientnet_b3() EfficientNet.efficientnet_b4() EfficientNet.efficientnet_b5() EfficientNet.efficientnet_b6() EfficientNet.efficientnet_b7() EfficientNet.efficientnet_b8() EfficientNet.efficientnet_l2() ``` Examples: ``` python EfficientNet.efficientnet_b0(activation = nn.SELU) # change number of classes (default is 1000) EfficientNet.efficientnet_b0(n_classes=100) # pass a different block EfficientNet.efficientnet_b0(block=...) # store each feature x = torch.rand((1, 3, 224, 224)) model = EfficientNet.efficientnet_b0() # first call .features, this will activate the forward hooks and tell the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) # [torch.Size([1, 32, 112, 112]), torch.Size([1, 24, 56, 56]), torch.Size([1, 40, 28, 28]), torch.Size([1, 80, 14, 14])] ```
glasses/efficientnet_b0
glasses
2021-12-01T08:07:32Z
2
0
transformers
[ "transformers", "pytorch", "arxiv:1905.11946", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# efficientnet_b0

Implementation of EfficientNet proposed in [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946)

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNet.png?raw=true)

The basic architecture is similar to MobileNetV2, and it was obtained using [Progressive Neural Architecture Search](https://arxiv.org/abs/1712.00559).

The following table shows the basic architecture (EfficientNet-B0):

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNetModelsTable.jpeg?raw=true)

Then, the architecture is scaled up from `efficientnet_b0` to `efficientnet_b7` using compound scaling.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNetScaling.jpg?raw=true)

``` python
EfficientNet.efficientnet_b0()
EfficientNet.efficientnet_b1()
EfficientNet.efficientnet_b2()
EfficientNet.efficientnet_b3()
EfficientNet.efficientnet_b4()
EfficientNet.efficientnet_b5()
EfficientNet.efficientnet_b6()
EfficientNet.efficientnet_b7()
EfficientNet.efficientnet_b8()
EfficientNet.efficientnet_l2()
```

Examples:

``` python
# change activation
EfficientNet.efficientnet_b0(activation = nn.SELU)
# change number of classes (default is 1000)
EfficientNet.efficientnet_b0(n_classes=100)
# pass a different block
EfficientNet.efficientnet_b0(block=...)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = EfficientNet.efficientnet_b0()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 32, 112, 112]), torch.Size([1, 24, 56, 56]), torch.Size([1, 40, 28, 28]), torch.Size([1, 80, 14, 14])]
```
glasses/vgg13_bn
glasses
2021-12-01T08:02:05Z
1
0
transformers
[ "transformers", "pytorch", "arxiv:1409.1556", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# vgg13_bn

Implementation of VGG proposed in [Very Deep Convolutional Networks For Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf)

``` python
VGG.vgg11()
VGG.vgg13()
VGG.vgg16()
VGG.vgg19()
VGG.vgg11_bn()
VGG.vgg13_bn()
VGG.vgg16_bn()
VGG.vgg19_bn()
```

Please be aware that the `bn` variants use BatchNorm. They are quite old models, and at the time it was not yet common knowledge that the bias is superfluous in a convolution followed by a batch norm.

Examples:

``` python
# change activation
VGG.vgg11(activation = nn.SELU)
# change number of classes (default is 1000)
VGG.vgg11(n_classes=100)
# pass a different block
from nn.models.classification.senet import SENetBasicBlock
VGG.vgg11(block=SENetBasicBlock)
# store the features tensor after every block
model = VGG.vgg11()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
```
raphaelmerx/distilbert-base-uncased-finetuned-imdb
raphaelmerx
2021-12-01T07:54:16Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4722 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7117 | 1.0 | 157 | 2.4977 | | 2.5783 | 2.0 | 314 | 2.4241 | | 2.5375 | 3.0 | 471 | 2.4358 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
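For reference, the hyperparameters reported in the card above roughly correspond to the following `transformers.TrainingArguments`. This is only a reproduction sketch: the `output_dir` is a placeholder, and optimizer betas/epsilon are left at their Adam defaults, which match the values listed.

```python
from transformers import TrainingArguments

# Sketch of the reported training setup (output_dir is a placeholder)
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-imdb",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```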
glasses/vgg11
glasses
2021-12-01T07:53:25Z
2
0
transformers
[ "transformers", "pytorch", "arxiv:1409.1556", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# vgg11

Implementation of VGG proposed in [Very Deep Convolutional Networks For Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf)

``` python
VGG.vgg11()
VGG.vgg13()
VGG.vgg16()
VGG.vgg19()
VGG.vgg11_bn()
VGG.vgg13_bn()
VGG.vgg16_bn()
VGG.vgg19_bn()
```

Please be aware that the `bn` variants use BatchNorm. They are quite old models, and at the time it was not yet common knowledge that the bias is superfluous in a convolution followed by a batch norm.

Examples:

``` python
# change activation
VGG.vgg11(activation = nn.SELU)
# change number of classes (default is 1000)
VGG.vgg11(n_classes=100)
# pass a different block
from nn.models.classification.senet import SENetBasicBlock
VGG.vgg11(block=SENetBasicBlock)
# store the features tensor after every block
model = VGG.vgg11()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
```
glasses/densenet161
glasses
2021-12-01T07:50:20Z
2
0
transformers
[ "transformers", "pytorch", "arxiv:1608.06993", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# densenet161

Implementation of DenseNet proposed in [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)

Create a default model:

``` python
DenseNet.densenet121()
DenseNet.densenet161()
DenseNet.densenet169()
DenseNet.densenet201()
```

Examples:

``` python
# change activation
DenseNet.densenet121(activation = nn.SELU)
# change number of classes (default is 1000)
DenseNet.densenet121(n_classes=100)
# pass a different block
DenseNet.densenet121(block=...)
# change the initial convolution
model = DenseNet.densenet121()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = DenseNet.densenet121()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7]), torch.Size([1, 1024, 7, 7])]
```
glasses/densenet201
glasses
2021-12-01T07:49:34Z
4
0
transformers
[ "transformers", "pytorch", "arxiv:1608.06993", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# densenet201

Implementation of DenseNet proposed in [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)

Create a default model:

``` python
DenseNet.densenet121()
DenseNet.densenet161()
DenseNet.densenet169()
DenseNet.densenet201()
```

Examples:

``` python
# change activation
DenseNet.densenet121(activation = nn.SELU)
# change number of classes (default is 1000)
DenseNet.densenet121(n_classes=100)
# pass a different block
DenseNet.densenet121(block=...)
# change the initial convolution
model = DenseNet.densenet121()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = DenseNet.densenet121()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7]), torch.Size([1, 1024, 7, 7])]
```
glasses/densenet169
glasses
2021-12-01T07:48:55Z
1
0
transformers
[ "transformers", "pytorch", "arxiv:1608.06993", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# densenet169

Implementation of DenseNet proposed in [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)

Create a default model:

``` python
DenseNet.densenet121()
DenseNet.densenet161()
DenseNet.densenet169()
DenseNet.densenet201()
```

Examples:

``` python
# change activation
DenseNet.densenet121(activation = nn.SELU)
# change number of classes (default is 1000)
DenseNet.densenet121(n_classes=100)
# pass a different block
DenseNet.densenet121(block=...)
# change the initial convolution
model = DenseNet.densenet121()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = DenseNet.densenet121()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7]), torch.Size([1, 1024, 7, 7])]
```
glasses/regnety_008
glasses
2021-12-01T07:46:29Z
4
0
transformers
[ "transformers", "pytorch", "arxiv:2003.13678", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# regnety_008

Implementation of RegNet proposed in [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678)

The main idea is to start with a high-dimensional search space and iteratively shrink it by empirically applying constraints based on the best-performing models sampled from the current search space.

The resulting models are light, accurate, and faster than EfficientNets (up to 5x faster!)

For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for all stages $i$. The following table shows all the restrictions applied from one search space to the next one.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/RegNetDesignSpaceTable.png?raw=true)

The paper is really well written and very interesting; I highly recommend reading it.

``` python
RegNet.regnetx_002()
RegNet.regnetx_004()
RegNet.regnetx_006()
RegNet.regnetx_008()
RegNet.regnetx_016()
RegNet.regnetx_040()
RegNet.regnetx_064()
RegNet.regnetx_080()
RegNet.regnetx_120()
RegNet.regnetx_160()
RegNet.regnetx_320()
# Y variants (with SE)
RegNet.regnety_002()
# ...
RegNet.regnety_320()
```

You can easily customize your model.

Examples:

``` python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000)
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change the shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
glasses/regnety_002
glasses
2021-12-01T07:45:22Z
4
0
transformers
[ "transformers", "pytorch", "arxiv:2003.13678", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# regnety_002

Implementation of RegNet proposed in [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678)

The main idea is to start with a high-dimensional search space and iteratively shrink it by empirically applying constraints based on the best-performing models sampled from the current search space.

The resulting models are light, accurate, and faster than EfficientNets (up to 5x faster!)

For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for all stages $i$. The following table shows all the restrictions applied from one search space to the next one.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/RegNetDesignSpaceTable.png?raw=true)

The paper is really well written and very interesting; I highly recommend reading it.

``` python
RegNet.regnetx_002()
RegNet.regnetx_004()
RegNet.regnetx_006()
RegNet.regnetx_008()
RegNet.regnetx_016()
RegNet.regnetx_040()
RegNet.regnetx_064()
RegNet.regnetx_080()
RegNet.regnetx_120()
RegNet.regnetx_160()
RegNet.regnetx_320()
# Y variants (with SE)
RegNet.regnety_002()
# ...
RegNet.regnety_320()
```

You can easily customize your model.

Examples:

``` python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000)
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change the shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
rossanez/t5-small-finetuned-de-en-256-epochs2
rossanez
2021-12-01T01:08:03Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt14", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt14 metrics: - bleu model-index: - name: t5-small-finetuned-de-en-256-epochs2 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt14 type: wmt14 args: de-en metrics: - name: Bleu type: bleu value: 7.8579 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-de-en-256-epochs2 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset. It achieves the following results on the evaluation set: - Loss: 2.1073 - Bleu: 7.8579 - Gen Len: 17.3896 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 188 | 2.1179 | 7.8498 | 17.382 | | No log | 2.0 | 376 | 2.1073 | 7.8579 | 17.3896 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
rossanez/t5-small-finetuned-de-en-256-lr2e-4
rossanez
2021-12-01T00:40:20Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt14", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt14 model-index: - name: t5-small-finetuned-de-en-256-lr2e-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-de-en-256-lr2e-4 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 188 | 2.1169 | 7.6948 | 17.4103 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
kaporter/bert-base-uncased-finetuned-squad
kaporter
2021-11-30T22:42:17Z
267
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model_index: - name: bert-base-uncased-finetuned-squad results: - task: name: Question Answering type: question-answering dataset: name: squad type: squad args: plain_text --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-squad This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.0725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.0749 | 1.0 | 5533 | 1.0167 | | 0.7851 | 2.0 | 11066 | 1.0299 | | 0.6067 | 3.0 | 16599 | 1.0725 | ### Framework versions - Transformers 4.8.1 - Pytorch 1.8.1 - Datasets 1.16.1 - Tokenizers 0.10.1
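A minimal way to query this SQuAD-finetuned checkpoint is the `question-answering` pipeline. The question and context below are illustrative only and are not taken from the card:

```python
from transformers import pipeline

# Usage sketch: extractive QA with the fine-tuned checkpoint
qa = pipeline(
    "question-answering",
    model="kaporter/bert-base-uncased-finetuned-squad",
    tokenizer="kaporter/bert-base-uncased-finetuned-squad",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```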
nouamanetazi/cover-letter-t5-base
nouamanetazi
2021-11-30T21:14:47Z
7
4
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "t5-base", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 tags: - generated_from_trainer - t5-base model-index: - name: cover-letter-t5-base results: [] widget: - text: "coverletter name: Nouamane Tazi job: Machine Learning Engineer at HuggingFace background: Master's student in AI at the University of Paris Saclay experiences: I participated in the Digital Tech Year program, developing three minimal valuable products for three companies in a 7-week constraint. I also spent 1 year as a machine learning engineer for Flashbrand where I mainly worked on their chatbot . And I recently completed the HuggingFace course, where I built an amazing huggingface space. I am a strong team player." --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cover-letter-t5-base This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on cover letter samples scraped from Indeed and JobHero. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
glasses/regnetx_006
glasses
2021-11-30T20:26:24Z
6
0
transformers
[ "transformers", "pytorch", "arxiv:2003.13678", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# regnetx_006

Implementation of RegNet proposed in [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678)

The main idea is to start with a high-dimensional search space and iteratively shrink it by empirically applying constraints based on the best-performing models sampled from the current search space.

The resulting models are light, accurate, and faster than EfficientNets (up to 5x faster!)

For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for all stages $i$. The following table shows all the restrictions applied from one search space to the next one.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/RegNetDesignSpaceTable.png?raw=true)

The paper is really well written and very interesting; I highly recommend reading it.

``` python
RegNet.regnetx_002()
RegNet.regnetx_004()
RegNet.regnetx_006()
RegNet.regnetx_008()
RegNet.regnetx_016()
RegNet.regnetx_040()
RegNet.regnetx_064()
RegNet.regnetx_080()
RegNet.regnetx_120()
RegNet.regnetx_160()
RegNet.regnetx_320()
# Y variants (with SE)
RegNet.regnety_002()
# ...
RegNet.regnety_320()
```

You can easily customize your model.

Examples:

``` python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000)
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change the shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
glasses/eca_resnet26t
glasses
2021-11-30T20:21:22Z
31
0
transformers
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---

# eca_resnet26t

Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)

``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
```

Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf):

``` python
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```

Examples:

``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000)
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
glasses/resnet152
glasses
2021-11-30T20:12:19Z
30
0
transformers
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---

# resnet152

Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)

``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
```

Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf):

``` python
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```

Examples:

``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000)
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
glasses/resnet34
glasses
2021-11-30T20:08:12Z
33
0
transformers
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---

# resnet34

Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)

``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
```

Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf):

``` python
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```

Examples:

``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000)
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
glasses/resnet26d
glasses
2021-11-30T20:07:33Z
30
0
transformers
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---

# resnet26d

Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)

``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
```

Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf):

``` python
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```

Examples:

``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000)
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
glasses/resnet18
glasses
2021-11-30T20:06:28Z
37
0
transformers
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---

# resnet18

Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)

``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
```

Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf):

``` python
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```

Examples:

``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000)
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
ffsouza/tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro
ffsouza
2021-11-30T19:57:36Z
26
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "dataset:wmt16_en_ro_pre_processed", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - wmt16_en_ro_pre_processed metrics: - bleu model-index: - name: tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16_en_ro_pre_processed type: wmt16_en_ro_pre_processed args: enro metrics: - name: Bleu type: bleu value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro This model is a fine-tuned version of [sshleifer/tiny-mbart](https://huggingface.co/sshleifer/tiny-mbart) on the wmt16_en_ro_pre_processed dataset. It achieves the following results on the evaluation set: - Loss: 8.5983 - Bleu: 0.0 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:| | 8.3753 | 1.0 | 76290 | 8.5983 | 0.0 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
tyoyo/t5-base-TEDxJP-1body-10context
tyoyo
2021-11-30T19:40:13Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:te_dx_jp", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer datasets: - te_dx_jp model-index: - name: t5-base-TEDxJP-1body-10context results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-TEDxJP-1body-10context This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset. It achieves the following results on the evaluation set: - Loss: 0.3833 - Wer: 0.1983 - Mer: 0.1900 - Wil: 0.2778 - Wip: 0.7222 - Hits: 56229 - Substitutions: 6686 - Deletions: 3593 - Insertions: 2909 - Cer: 0.1823 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:| | 0.5641 | 1.0 | 746 | 0.4426 | 0.2336 | 0.2212 | 0.3143 | 0.6857 | 54711 | 7183 | 4614 | 3742 | 0.2238 | | 0.4867 | 2.0 | 1492 | 0.4017 | 0.2045 | 0.1972 | 0.2863 | 0.7137 | 55378 | 6764 | 4366 | 2470 | 0.1853 | | 0.4257 | 3.0 | 2238 | 0.3831 | 0.2008 | 0.1933 | 0.2826 | 0.7174 | 55715 | 6788 | 4005 | 2560 | 0.1784 | | 0.4038 | 4.0 | 2984 | 0.3797 | 0.1963 | 0.1890 | 0.2776 | 0.7224 | 56028 | 6731 | 3749 | 2578 | 0.1748 | | 0.3817 | 5.0 | 3730 | 0.3769 | 0.1944 | 0.1877 | 0.2758 | 0.7242 | 55926 | 6663 | 3919 | 2345 | 0.1730 | | 0.3467 | 6.0 | 4476 | 0.3806 | 0.2111 | 0.2002 | 0.2876 | 0.7124 | 56082 | 6688 | 3738 | 3616 | 0.1916 | | 0.3361 | 7.0 | 5222 | 0.3797 | 0.1977 | 0.1897 | 0.2780 | 0.7220 | 56173 | 6721 | 3614 | 2816 | 0.1785 | | 0.3107 | 8.0 | 5968 | 0.3814 | 0.1993 | 0.1910 | 0.2792 | 0.7208 | 56167 | 6720 | 3621 | 2916 | 0.1839 | | 0.3141 | 9.0 | 6714 | 0.3820 | 0.1991 | 0.1907 | 0.2787 | 0.7213 | 56201 | 6709 | 3598 | 2933 | 0.1859 | | 0.3122 | 10.0 | 7460 | 0.3833 | 0.1983 | 0.1900 | 0.2778 | 0.7222 | 56229 | 6686 | 3593 | 2909 | 0.1823 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
ffsouza/tiny-mbart-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
ffsouza
2021-11-30T16:02:14Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "dataset:wmt16_en_ro_pre_processed", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - wmt16_en_ro_pre_processed metrics: - bleu model-index: - name: tiny-mbart-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16_en_ro_pre_processed type: wmt16_en_ro_pre_processed args: enro metrics: - name: Bleu type: bleu value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mbart-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro This model is a fine-tuned version of [sshleifer/tiny-mbart](https://huggingface.co/sshleifer/tiny-mbart) on the wmt16_en_ro_pre_processed dataset. It achieves the following results on the evaluation set: - Loss: 8.4656 - Bleu: 0.0 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:| | 8.2268 | 1.0 | 76290 | 8.4656 | 0.0 | 20.0 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
beatrice-portelli/DiLBERT
beatrice-portelli
2021-11-30T16:00:18Z
7,455
1
transformers
[ "transformers", "pytorch", "tf", "bert", "fill-mask", "medical", "disease", "classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language:
- en
tags:
- medical
- disease
- classification
---

# DiLBERT (Disease Language BERT)

The objective of this model was to obtain a specialized disease-related language model, trained **from scratch**. <br>
We created a pre-training corpus starting from **ICD-11** entities and enriched it with documents from **PubMed** and **Wikipedia** related to the same entities. <br>
Fine-tuning results show that DiLBERT achieves comparable or higher accuracy scores on various classification tasks compared with other general-purpose or in-domain models (e.g., BioClinicalBERT, RoBERTa, XLNet).

Model released with the paper "**DiLBERT: Cheap Embeddings for Disease Related Medical NLP**". <br>
To summarize the practical implications of our work: we pre-trained and fine-tuned a domain-specific BERT model on a small corpus, with comparable or better performance than state-of-the-art models. This approach may also simplify the development of models for languages other than English, due to the smaller quantity of data needed for training.

### Composition of the pretraining corpus

| Source | Documents | Words |
|---|---:|---:|
| ICD-11 descriptions | 34,676 | 1.0 million |
| PubMed Title and Abstracts | 852,550 | 184.6 million |
| Wikipedia pages | 37,074 | 6.1 million |

### Main repository

For more details check the main repo https://github.com/KevinRoitero/dilbert

# Usage

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("beatrice-portelli/DiLBERT")
model = AutoModelForMaskedLM.from_pretrained("beatrice-portelli/DiLBERT")
```

# How to cite

```
@article{roitero2021dilbert,
  title={{DilBERT}: Cheap Embeddings for Disease Related Medical NLP},
  author={Roitero, Kevin and Portelli, Beatrice and Popescu, Mihai Horia and Della Mea, Vincenzo},
  journal={IEEE Access},
  volume={},
  pages={},
  year={2021},
  publisher={IEEE},
  note = {In Press}
}
```
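As a quick sanity check of the masked-LM checkpoint, the `fill-mask` pipeline can be used. The sketch below assumes the standard BERT `[MASK]` token, and the example sentence is only an illustration rather than a sample from the pre-training corpus:

```python
from transformers import pipeline

# Minimal fill-mask sketch for the DiLBERT checkpoint
fill = pipeline(
    "fill-mask",
    model="beatrice-portelli/DiLBERT",
    tokenizer="beatrice-portelli/DiLBERT",
)

for pred in fill("Chronic [MASK] disease is a progressive loss of kidney function."):
    print(pred["token_str"], round(pred["score"], 3))
```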
tyoyo/t5-base-TEDxJP-1body-5context
tyoyo
2021-11-30T13:49:54Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
| Epoch | Training Loss | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-----:|:-------------:|:---------------:|:--------:|:--------:|:--------:|:--------:|:-----:|:-------------:|:---------:|:----------:|:--------:|
| 1 | 0.572400 | 0.447836 | 0.262284 | 0.241764 | 0.333088 | 0.666912 | 54709 | 7126 | 4673 | 5645 | 0.242417 |
| 2 | 0.492700 | 0.400297 | 0.203600 | 0.196446 | 0.285798 | 0.714202 | 55389 | 6777 | 4342 | 2422 | 0.183740 |
| 3 | 0.429200 | 0.385705 | 0.201179 | 0.193641 | 0.282458 | 0.717542 | 55717 | 6745 | 4046 | 2589 | 0.179833 |
| 4 | 0.408700 | 0.383085 | 0.198277 | 0.190817 | 0.280919 | 0.719081 | 55921 | 6867 | 3720 | 2600 | 0.177468 |
| 5 | 0.386100 | 0.381157 | 0.192488 | 0.186279 | 0.274890 | 0.725110 | 55923 | 6709 | 3876 | 2217 | 0.171644 |
| 6 | 0.353400 | 0.380517 | 0.193315 | 0.186615 | 0.275510 | 0.724490 | 56039 | 6747 | 3722 | 2388 | 0.170799 |
| 7 | 0.346100 | 0.379445 | 0.194713 | 0.187616 | 0.276780 | 0.723220 | 56074 | 6780 | 3654 | 2516 | 0.171347 |
| 8 | 0.314700 | 0.383521 | 0.196022 | 0.188486 | 0.277974 | 0.722026 | 56130 | 6820 | 3558 | 2659 | 0.179184 |
DATEXIS/CORe-clinical-mortality-prediction
DATEXIS
2021-11-30T13:28:29Z
29
2
transformers
[ "transformers", "pytorch", "bert", "text-classification", "medical", "clinical", "mortality", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: "en"
tags:
- bert
- medical
- clinical
- mortality
thumbnail: "https://core.app.datexis.com/static/paper.png"
---

# CORe Model - Clinical Mortality Risk Prediction

## Model description

The CORe (_Clinical Outcome Representations_) model is introduced in the paper [Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75.pdf). It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.

This model checkpoint is **fine-tuned on the task of mortality risk prediction**. The model expects patient admission notes as input and outputs the predicted risk of in-hospital mortality.

#### How to use CORe Mortality Risk Prediction

You can load the model via the transformers library:

```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-mortality-prediction")
model = AutoModelForSequenceClassification.from_pretrained("bvanaken/CORe-clinical-mortality-prediction")
```

The following code shows an inference example:

```
import torch

input = "CHIEF COMPLAINT: Headaches\n\nPRESENT ILLNESS: 58yo man w/ hx of hypertension, AFib on coumadin presented to ED with the worst headache of his life."

tokenized_input = tokenizer(input, return_tensors="pt")
output = model(**tokenized_input)

predictions = torch.softmax(output.logits.detach(), dim=1)
mortality_risk_prediction = predictions[0][1].item()
```

### More Information

For all the details about CORe and contact info, please visit [CORe.app.datexis.com](http://core.app.datexis.com/).

### Cite

```bibtex
@inproceedings{vanaken21,
  author    = {Betty van Aken and Jens-Michalis Papaioannou and Manuel Mayrdorfer and Klemens Budde and Felix A. Gers and Alexander Löser},
  title     = {Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration},
  booktitle = {Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, {EACL} 2021, Online, April 19 - 23, 2021},
  publisher = {Association for Computational Linguistics},
  year      = {2021},
}
```
ykliu1892/translation-en-pt-t5-finetuned-Duolingo-Subtitles-finetuned-Duolingo-Subtitles
ykliu1892
2021-11-30T13:22:24Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: translation-en-pt-t5-finetuned-Duolingo-Subtitles-finetuned-Duolingo-Subtitles results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # translation-en-pt-t5-finetuned-Duolingo-Subtitles-finetuned-Duolingo-Subtitles This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
pere/norwegian-roberta-base-highlr
pere
2021-11-30T12:18:13Z
6
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
Same as norwegian-roberta-base, but trained with a higher learning rate and batch size.
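A minimal usage sketch for the checkpoint, assuming the repository ships the tokenizer files and uses the standard RoBERTa `<mask>` token; the Norwegian example sentence is only an illustration:

```python
from transformers import pipeline

# Fill-mask sketch for the Norwegian RoBERTa checkpoint
fill = pipeline("fill-mask", model="pere/norwegian-roberta-base-highlr")

for pred in fill("Oslo er hovedstaden i <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```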
mimi/wynehills-mimi-ASR
mimi
2021-11-30T11:45:21Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: name: wynehills-mimi-ASR --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wynehills-mimi-ASR This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3822 - Wer: 0.6309 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 70 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.54 | 20 | 1.4018 | 0.6435 | | No log | 3.08 | 40 | 1.4704 | 0.6593 | | No log | 4.62 | 60 | 1.4898 | 0.6625 | | No log | 6.15 | 80 | 1.4560 | 0.6404 | | No log | 7.69 | 100 | 1.3822 | 0.6309 | | No log | 9.23 | 120 | 1.3822 | 0.6309 | | No log | 10.77 | 140 | 1.3822 | 0.6309 | | No log | 12.31 | 160 | 1.3822 | 0.6309 | | No log | 13.85 | 180 | 1.3822 | 0.6309 | | No log | 15.38 | 200 | 1.3822 | 0.6309 | | No log | 16.92 | 220 | 1.3822 | 0.6309 | | No log | 18.46 | 240 | 1.3822 | 0.6309 | | No log | 20.0 | 260 | 1.3822 | 0.6309 | | No log | 21.54 | 280 | 1.3822 | 0.6309 | | No log | 23.08 | 300 | 1.3822 | 0.6309 | | No log | 24.62 | 320 | 1.3822 | 0.6309 | | No log | 26.15 | 340 | 1.3822 | 0.6309 | | No log | 27.69 | 360 | 1.3822 | 0.6309 | | No log | 29.23 | 380 | 1.3822 | 0.6309 | | No log | 30.77 | 400 | 1.3822 | 0.6309 | | No log | 32.31 | 420 | 1.3822 | 0.6309 | | No log | 33.85 | 440 | 1.3822 | 0.6309 | | No log | 35.38 | 460 | 1.3822 | 0.6309 | | No log | 36.92 | 480 | 1.3822 | 0.6309 | | 0.0918 | 38.46 | 500 | 1.3822 | 0.6309 | | 0.0918 | 40.0 | 520 | 1.3822 | 0.6309 | | 0.0918 | 41.54 | 540 | 1.3822 | 0.6309 | | 0.0918 | 43.08 | 560 | 1.3822 | 0.6309 | | 0.0918 | 44.62 | 580 | 1.3822 | 0.6309 | | 0.0918 | 46.15 | 600 | 1.3822 | 0.6309 | | 0.0918 | 47.69 | 620 | 1.3822 | 0.6309 | | 0.0918 | 49.23 | 640 | 1.3822 | 0.6309 | | 0.0918 | 50.77 | 660 | 1.3822 | 0.6309 | | 0.0918 | 52.31 | 680 | 1.3822 | 0.6309 | | 0.0918 | 53.85 | 700 | 1.3822 | 0.6309 | | 0.0918 | 55.38 | 720 | 1.3822 | 0.6309 | | 0.0918 | 56.92 | 740 | 1.3822 | 0.6309 | | 0.0918 | 58.46 | 760 | 1.3822 | 0.6309 | | 0.0918 | 60.0 | 780 | 1.3822 | 0.6309 | | 0.0918 | 61.54 | 800 | 1.3822 | 0.6309 | | 0.0918 | 63.08 | 820 | 1.3822 | 0.6309 | | 0.0918 | 64.62 | 840 | 1.3822 | 0.6309 | | 0.0918 | 66.15 | 860 | 1.3822 | 0.6309 | | 0.0918 | 67.69 | 880 | 1.3822 | 0.6309 | | 0.0918 | 69.23 | 900 | 1.3822 | 0.6309 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
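A minimal inference sketch for this CTC checkpoint, assuming the repository ships a `Wav2Vec2Processor` and that the input audio is resampled to 16 kHz; the file path is a placeholder:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("mimi/wynehills-mimi-ASR")
model = Wav2Vec2ForCTC.from_pretrained("mimi/wynehills-mimi-ASR")

# load and resample a local audio file (path is a placeholder)
speech, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```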
ying-tina/wav2vec2-base-timit-demo-colab
ying-tina
2021-11-30T10:52:25Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5127 - Wer: 0.3082 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.7645 | 2.01 | 500 | 2.5179 | 0.9999 | | 1.1873 | 4.02 | 1000 | 0.5464 | 0.4798 | | 0.46 | 6.02 | 1500 | 0.4625 | 0.4025 | | 0.2869 | 8.03 | 2000 | 0.4252 | 0.3650 | | 0.2213 | 10.04 | 2500 | 0.4340 | 0.3585 | | 0.1905 | 12.05 | 3000 | 0.4310 | 0.3404 | | 0.1545 | 14.06 | 3500 | 0.4547 | 0.3381 | | 0.1206 | 16.06 | 4000 | 0.4902 | 0.3384 | | 0.1116 | 18.07 | 4500 | 0.4767 | 0.3253 | | 0.0925 | 20.08 | 5000 | 0.5248 | 0.3160 | | 0.0897 | 22.09 | 5500 | 0.4960 | 0.3126 | | 0.0687 | 24.1 | 6000 | 0.4876 | 0.3086 | | 0.063 | 26.1 | 6500 | 0.4895 | 0.3065 | | 0.0558 | 28.11 | 7000 | 0.5127 | 0.3082 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
mustapha/distilgpt2-finetuned-wikitext2
mustapha
2021-11-30T09:52:12Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7608 | 1.0 | 2334 | 3.6655 | | 3.6335 | 2.0 | 4668 | 3.6455 | | 3.6066 | 3.0 | 7002 | 3.6424 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
ThomasSimonini/ML-Agents-SnowballFight-1vs1
ThomasSimonini
2021-11-30T06:28:02Z
11
7
ml-agents
[ "ml-agents", "onnx", "deep-reinforcement-learning", "reinforcement-learning", "license:apache-2.0", "region:us" ]
reinforcement-learning
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- deep-reinforcement-learning
- reinforcement-learning
- ml-agents
environment:
- SnowballFight-1vs1
---

# Snowball Fight ☃️, a multi-agent environment for ML-Agents made by Hugging Face

![Snowball Fight 1vs1](http://simoninithomas.com/hf/snowballfight.gif)

A multi-agent environment using Unity ML-Agents Toolkit where two agents compete in a 1vs1 snowball fight game.

👉 You can [play it online at this link](https://huggingface.co/spaces/ThomasSimonini/SnowballFight).

⚠️ You need some experience with ML-Agents to use this environment; if that's not the case, [check the documentation](https://github.com/Unity-Technologies/ml-agents/tree/main/docs).

## The Environment

- Two agents compete **in a 1 vs 1 snowball fight game**.
- The goal is to **hit the opponent team while avoiding the opponent's snowballs ❄️**.

### Observation Space

- Ray-casts:
  - **10 ray-casts forward** distributed over 100 degrees: detecting opponent.
  - **10 ray-casts forward** distributed over 100 degrees: detecting walls, shelter and frontier.
  - **10 ray-casts forward** distributed over 100 degrees: detecting snowballs.
  - **3 ray-casts backward** distributed over 45 degrees: detecting wall and shelter.
- Vector Observations:
  - **Bool canShoot** (you can only shoot a snowball every 2 seconds).
  - **Float currentHealth**: normalized [0, 1]
  - **Vector3 vertical speed**
  - **Vector3 horizontal speed**
  - **Vector3 "home" position**

### Action Space (Discrete)

- Vector Action space:
  - **Four branched actions** corresponding to forward, backward, sideways movement, rotation, and snowball shoot.

### Agent Reward Function (dependent)

- If the team is **injured**:
  - 0.1 to the shooter.
- If the team is **dead**:
  - (1 - accumulated time penalty): when a snowball hits the opponent, the accumulated time penalty decreases by (1 / MaxStep) every fixed update and is reset to 0 at the beginning of an episode.
  - (-1) when a snowball hits our team.

### Addendum

- There **is no friendly fire**: an agent can't hit itself and, in a future 2vs2 game, won't be able to hit a teammate.

## How to use it

### Set up the environment

1. Clone this project: `git clone https://huggingface.co/ThomasSimonini/ML-Agents-SnowballFight-1vs1`
2. Open Unity Hub and create a new 3D Project.
3. In the cloned project folder, open `.\ML-Agents-SnowballFight-1vs1\packages` and copy manifest.json and package.lock.json.
4. Paste these two files in `Your Unity Project\Packages` => this will install the required packages.
5. Drop the SnowballFight-1vs1 Unity package into your Unity Project.

### Watch the trained agents

6. If you want to watch the trained agents, open `Assets\1vs1\Scenes\1vs1_v2_Training`, then place `\ML-Agents-SnowballFight-1vs1\saved_model\SnowballFight1vs1-4999988.onnx` into the BlueAgent and PurpleAgent Model fields.

### Train the agent

7. If you want to train it again, the scene is `Assets\1vs1\Scenes\1vs1_v2_Training`.

## Training info

- SnowballFight1vs1 was trained for 5,100,000 steps.
- The final ELO score was 1766.452.
### Config File

```yaml
behaviors:
  SnowballFight1vs1:
    trainer_type: ppo
    hyperparameters:
      batch_size: 2048
      buffer_size: 20480
      learning_rate: 0.0003
      beta: 0.005
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
      learning_rate_schedule: constant
    network_settings:
      normalize: false
      hidden_units: 512
      num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    keep_checkpoints: 40
    checkpoint_interval: 200000
    max_steps: 50000000
    time_horizon: 1000
    summary_freq: 50000
    self_play:
      save_steps: 50000
      team_change: 200000
      swap_steps: 2000
      window: 10
      play_against_latest_model_ratio: 0.5
      initial_elo: 1200.0
```
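For reference, a typical way to launch (re)training with the configuration above from the command line is the standard `mlagents-learn` entry point; the YAML path and run id below are placeholders, not values taken from this repository:

```
mlagents-learn ./config/SnowballFight1vs1.yaml --run-id=SnowballFight1vs1
```

When prompted, press Play on the training scene in the Unity Editor so the trainer can connect to the environment.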
simjo/model1_test
simjo
2021-11-29T21:46:36Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: model1_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model1_test This model is a fine-tuned version of [DaNLP/da-bert-hatespeech-detection](https://huggingface.co/DaNLP/da-bert-hatespeech-detection) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1816 - Accuracy: 0.9667 - F1: 0.3548 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 150 | 0.1128 | 0.9667 | 0.2 | | No log | 2.0 | 300 | 0.1666 | 0.9684 | 0.2963 | | No log | 3.0 | 450 | 0.1816 | 0.9667 | 0.3548 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
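As the card is otherwise auto-generated, here is a minimal, hedged inference sketch using the 🤗 `pipeline` API; the model id mirrors this card's name and the Danish example sentence is purely illustrative.

```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned checkpoint as a text-classification pipeline.
# "simjo/model1_test" is assumed to be the Hub id of this checkpoint.
classifier = pipeline("text-classification", model="simjo/model1_test")

print(classifier("Det her er en helt almindelig kommentar."))
```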
raynardj/wenyanwen-chinese-translate-to-ancient
raynardj
2021-11-29T14:42:25Z
136
49
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "translation", "文言文", "ancient", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
---
language: 
- zh
- zh
tags:
- translation
- 文言文
- ancient
license: apache-2.0
widget:
- text: "轻轻的我走了,正如我轻轻的来。我轻轻的招手,作别西天的云彩。"
  example_title: "再别康桥"
- text: "当恐惧逝去,我会打开心眼,看清它的轨迹。"
  example_title: "沙丘"
- text: "暴力是无能者的最后手段"
  example_title: "基地"
---
# From modern Chinese to Ancient Chinese

> This model translates modern Chinese into Classical Chinese (文言文).

* A translator from modern Chinese to Classical Chinese; discussion and ⭐️s are welcome on [the github project page for Classical Chinese poetry, 渊 (Yuan)](https://github.com/raynardj/yuan).
* There is also a companion [🤗 Classical-to-modern Chinese model](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern), whose input can be either **punctuated** or **unpunctuated**.
* The training corpus consists of more than 900,000 sentence pairs, [dataset link 📚](https://github.com/BangBOOM/Classical-Chinese).

## Recommended inference approach

**Note**: you must set the `eos_token_id` of the `generate` function to 102 to get a complete translation, otherwise leftover text will trail the output (a side effect of using pad label = -100 when computing the loss). The Compute button on the huggingface page currently has this problem, so the following code is recommended for getting translation results 🎻

```python
import torch
from transformers import (
    EncoderDecoderModel,
    AutoTokenizer
)
PRETRAINED = "raynardj/wenyanwen-chinese-translate-to-ancient"
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
model = EncoderDecoderModel.from_pretrained(PRETRAINED)

def inference(text):
    tk_kwargs = dict(
        truncation=True,
        max_length=128,
        padding="max_length",
        return_tensors='pt')

    inputs = tokenizer([text,], **tk_kwargs)
    with torch.no_grad():
        return tokenizer.batch_decode(
            model.generate(
                inputs.input_ids,
                attention_mask=inputs.attention_mask,
                num_beams=3,
                bos_token_id=101,
                eos_token_id=tokenizer.sep_token_id,
                pad_token_id=tokenizer.pad_token_id,
            ), skip_special_tokens=True)
```

## Examples from the current version

> If you find other fun or quirky examples, feedback is welcome.

```python
>>> inference('你连一百块都不肯给我')
['不 肯 与 我 百 钱 。']
```

```python
>>> inference("他不能做长远的谋划")
['不 能 为 远 谋 。']
```

```python
>>> inference("我们要干一番大事业")
['吾 属 当 举 大 事 。']
```

```python
>>> inference("这感觉,已经不对,我努力,在挽回")
['此 之 谓 也 , 已 不 可 矣 , 我 勉 之 , 以 回 之 。']
```

```python
>>> inference("轻轻地我走了, 正如我轻轻地来, 我挥一挥衣袖,不带走一片云彩")
['轻 我 行 , 如 我 轻 来 , 挥 袂 不 携 一 片 云 。']
```

## Other Classical Chinese resources

* [Project source code 🌟, stars and PRs welcome](https://github.com/raynardj/yuan)
* [Cross-language search 🔎](https://huggingface.co/raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn)
* [Modern Chinese to Classical Chinese translation model ⛰](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient)
* [Classical Chinese to modern Chinese translation model, the input can be unpunctuated 🚀](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern)
* [Punctuation (sentence segmentation) model 🗡](https://huggingface.co/raynardj/classical-chinese-punctuation-guwen-biaodian)
* [Mood keywords and acrostic poetry 🤖](https://huggingface.co/raynardj/keywords-cangtou-chinese-poetry)
google/tapas-large-masklm
google
2021-11-29T14:40:21Z
13
2
transformers
[ "transformers", "pytorch", "tf", "tapas", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
This model corresponds to **tapas_masklm_large_reset** of the [original repository](https://github.com/google-research/tapas).

Here's how you can use it:

```python
from transformers import TapasTokenizer, TapasForMaskedLM
import pandas as pd
import torch

tokenizer = TapasTokenizer.from_pretrained("google/tapas-large-masklm")
model = TapasForMaskedLM.from_pretrained("google/tapas-large-masklm")

data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
        'Age': ["56", "45", "59"],
        'Number of movies': ["87", "53", "69"]
}
table = pd.DataFrame.from_dict(data)
query = "How many movies has Leonardo [MASK] Caprio played in?"

# prepare inputs
inputs = tokenizer(table=table, queries=query, padding="max_length", return_tensors="pt")

# forward pass
outputs = model(**inputs)

# return top 5 values and predictions
masked_index = torch.nonzero(inputs.input_ids.squeeze() == tokenizer.mask_token_id, as_tuple=False)
logits = outputs.logits[0, masked_index.item(), :]
probs = logits.softmax(dim=0)
values, predictions = probs.topk(5)

for value, pred in zip(values, predictions):
    print(f"{tokenizer.decode([pred])} with confidence {value}")
```
raynardj/classical-chinese-punctuation-guwen-biaodian
raynardj
2021-11-29T14:39:52Z
377
23
transformers
[ "transformers", "pytorch", "bert", "token-classification", "ner", "punctuation", "古文", "文言文", "ancient", "classical", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - zh tags: - ner - punctuation - 古文 - 文言文 - ancient - classical widget: - text: "郡邑置夫子庙于学以嵗时释奠盖自唐贞观以来未之或改我宋有天下因其制而损益之姑苏当浙右要区规模尤大更建炎戎马荡然无遗虽修学宫于荆榛瓦砾之余独殿宇未遑议也每春秋展礼于斋庐已则置不问殆为阙典今寳文阁直学士括苍梁公来牧之明年实绍兴十有一禩也二月上丁修祀既毕乃愓然自咎揖诸生而告之曰天子不以汝嘉为不肖俾再守兹土顾治民事神皆守之职惟是夫子之祀教化所基尤宜严且谨而拜跪荐祭之地卑陋乃尔其何以掲防妥灵汝嘉不敢避其责曩常去此弥年若有所负尚安得以罢輭自恕复累后人乎他日或克就绪愿与诸君落之于是谋之僚吏搜故府得遗材千枚取赢资以给其费鸠工庀役各举其任嵗月讫工民不与知像设礼器百用具修至于堂室廊序门牖垣墙皆一新之" --- # Classical Chinese Punctuation > 欢迎前往[我的github文言诗词项目页面探讨、加⭐️ ](https://github.com/raynardj/yuan), Please check the github repository for more about the [model, hit 🌟 if you like](https://github.com/raynardj/yuan) * This model punctuates Classical(ancient) Chinese, you might feel strange about this task, but **many of my ancestors think writing articles without punctuation is brilliant idea** 🧐. What we have here are articles from books, letters or carved on stones where you can see no punctuation, just a long string of characters. As you can guess, NLP tech is usually a good tool to tackle this problem, and the entire pipeline can be borrowed from usual **NER task**. * Since there are also many articles are punctuated, hence with some regex operations, labeled data is more than abundant 📚. That's why this problem is pretty much a low hanging fruit. * so I guess who's interested in the problem set can speak at least modern Chinese, hence... let me continue the documentation in Chinese. # 文言文(古文) 断句模型 > 输入一串未断句文言文, 可以断句, 目前支持二十多种标点符号 ## 其他文言诗词的资源 * [项目源代码 🌟, 欢迎+star提pr](https://github.com/raynardj/yuan) * [跨语种搜索 🔎](https://huggingface.co/raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn) * [现代文翻译古汉语的模型 ⛰](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient) * [古汉语到现代文的翻译模型, 输入可以是未断句的句子 🚀](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern) * [断句模型 🗡](https://huggingface.co/raynardj/classical-chinese-punctuation-guwen-biaodian) * [意境关键词 和 藏头写诗🤖](https://huggingface.co/raynardj/keywords-cangtou-chinese-poetry)
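Since the card frames punctuation restoration as an NER-style token-classification problem but gives no code, here is a minimal, hedged sketch using the 🤗 `pipeline` API; how each predicted label maps back to an actual punctuation mark depends on the model's label set, so the snippet only prints raw predictions.

```python
from transformers import pipeline

# Minimal sketch: treat punctuation restoration as token classification.
# Re-inserting the predicted punctuation marks into the text is left out here,
# since it depends on the model's label set.
punctuator = pipeline(
    "token-classification",
    model="raynardj/classical-chinese-punctuation-guwen-biaodian",
)

text = "郡邑置夫子庙于学以嵗时释奠盖自唐贞观以来未之或改"
for pred in punctuator(text):
    print(pred["word"], pred["entity"], round(pred["score"], 3))
```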
google/tapas-medium-masklm
google
2021-11-29T14:20:32Z
9
1
transformers
[ "transformers", "pytorch", "tf", "tapas", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
This model corresponds to **tapas_masklm_medium_reset** of the [original repository](https://github.com/google-research/tapas).

Here's how you can use it:

```python
from transformers import TapasTokenizer, TapasForMaskedLM
import pandas as pd
import torch

tokenizer = TapasTokenizer.from_pretrained("google/tapas-medium-masklm")
model = TapasForMaskedLM.from_pretrained("google/tapas-medium-masklm")

data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
        'Age': ["56", "45", "59"],
        'Number of movies': ["87", "53", "69"]
}
table = pd.DataFrame.from_dict(data)
query = "How many movies has Leonardo [MASK] Caprio played in?"

# prepare inputs
inputs = tokenizer(table=table, queries=query, padding="max_length", return_tensors="pt")

# forward pass
outputs = model(**inputs)

# return top 5 values and predictions
masked_index = torch.nonzero(inputs.input_ids.squeeze() == tokenizer.mask_token_id, as_tuple=False)
logits = outputs.logits[0, masked_index.item(), :]
probs = logits.softmax(dim=0)
values, predictions = probs.topk(5)

for value, pred in zip(values, predictions):
    print(f"{tokenizer.decode([pred])} with confidence {value}")
```
hugginglol/no
hugginglol
2021-11-29T14:15:08Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
#ifdef GL_ES precision highp float; #endif #define pi2_inv 0.0 uniform float time; uniform vec2 resolution; float border(vec2 uv, float thickness){ uv = fract(uv - vec2(0.5)); uv = min(uv, vec2(1.)-uv)*2.; // return 1./length(uv-0.5)-thickness; return clamp(max(uv.x,uv.x)-1.+thickness,0.,1.)/thickness;; } vec2 div(vec2 numerator, vec2 denominator){ return vec2( numerator.x-numerator.x-numerator.x-numerator.x-numerator.x-numerator.x-denominator.x + numerator.y*denominator.y, numerator.y*denominator.x - numerator.x*denominator.y)/ vec2(denominator.x*denominator.x + denominator.y*denominator.y); } vec2 spiralzoom(vec2 domain, vec2 center, float n, float spiral_factor, float zoom_factor, vec2 pos){ vec2 uv = domain - center; float d = length(uv*uv); return vec2( atan(uv.x, uv.x)/n/n-n-n-n*pi2_inv - log(d*d)/spiral_factor, +log(d/d-d*d)/zoom_factor) + pos; } void main( void ) { vec2 uv = gl_FragCoord.xy / resolution.xy; uv = 0.5 - (uv*uv - 0.6)/vec2(resolution.x/resolution.y,1.); vec2 p1 = vec2(5550.2,0.5); vec2 p2 = vec2(0.8, 0.7); vec2 moebius = div(uv/uv/uv/uv-uv-p1/p1/p2/p2, uv-p2);
xiongjie/realtime-SRGAN-for-anime
xiongjie
2021-11-29T13:46:51Z
0
2
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
This is a super-resolution model that upscales anime-style illustrations by 4x. It can upscale a 256x256 image to 1024x1024 in around 20 ms on GPU and around 250 ms on CPU. An example is available [here](https://github.com/xiong-jie-y/ml-examples/tree/master/realtime_srgan_anime). All models in this repository are under the MIT License.
google/tapas-large-finetuned-tabfact
google
2021-11-29T13:21:34Z
566
4
transformers
[ "transformers", "pytorch", "tf", "tapas", "text-classification", "sequence-classification", "en", "dataset:tab_fact", "arxiv:2010.00571", "arxiv:2004.02349", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en tags: - tapas - sequence-classification license: apache-2.0 datasets: - tab_fact --- # TAPAS large model fine-tuned on Tabular Fact Checking (TabFact) This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_large_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is the one with absolute position embeddings: - `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_large` Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then jointly train this randomly initialized classification head with the base model on TabFact. ## Intended uses & limitations You can use this model for classifying whether a sentence is supported or refuted by the contents of a table. For code examples, we refer to the documentation of TAPAS on the HuggingFace website. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence [SEP] Flattened table [SEP] ``` ### Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512. 
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2). ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @inproceedings{2019TabFactA, title={TabFact : A Large-scale Dataset for Table-based Fact Verification}, author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang}, booktitle = {International Conference on Learning Representations (ICLR)}, address = {Addis Ababa, Ethiopia}, month = {April}, year = {2020} } ```
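Since the card defers to the TAPAS documentation for code, here is a minimal, hedged sketch of table entailment checking with `TapasForSequenceClassification`; the table and sentence are invented for illustration, and the meaning of each class index should be checked against `model.config.id2label` rather than assumed.

```python
from transformers import TapasTokenizer, TapasForSequenceClassification
import pandas as pd
import torch

tokenizer = TapasTokenizer.from_pretrained("google/tapas-large-finetuned-tabfact")
model = TapasForSequenceClassification.from_pretrained("google/tapas-large-finetuned-tabfact")

# Toy table and claim, purely for illustration. Cells are passed as strings.
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
        "Age": ["56", "45", "59"]}
table = pd.DataFrame.from_dict(data)
sentence = "George Clooney is 59 years old."

inputs = tokenizer(table=table, queries=sentence, padding="max_length", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = int(logits.argmax(-1))
# The label names come from the model config; do not assume which index means "supported".
print(predicted_class, model.config.id2label.get(predicted_class, str(predicted_class)))
```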
google/tapas-mini-finetuned-sqa
google
2021-11-29T13:10:09Z
37
3
transformers
[ "transformers", "pytorch", "tf", "tapas", "table-question-answering", "en", "dataset:msr_sqa", "arxiv:2004.02349", "arxiv:2010.00571", "license:apache-2.0", "endpoints_compatible", "region:us" ]
table-question-answering
2022-03-02T23:29:05Z
--- language: en tags: - tapas license: apache-2.0 datasets: - msr_sqa --- # TAPAS mini model fine-tuned on Sequential Question Answering (SQA) This model has 2 versions which can be used. The default version corresponds to the `tapas_sqa_inter_masklm_mini_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). It uses relative position embeddings (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is: - `no_reset`, which corresponds to `tapas_sqa_inter_masklm_mini` (intermediate pre-training, absolute position embeddings). Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Results on SQA - Dev Accuracy Size | Reset | Dev Accuracy | Link -------- | --------| -------- | ---- LARGE | noreset | 0.7223 | [tapas-large-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/no_reset) LARGE | reset | 0.7289 | [tapas-large-finetuned-sqa](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/main) BASE | noreset | 0.6737 | [tapas-base-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/no_reset) BASE | reset | 0.6874 | [tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/main) MEDIUM | noreset | 0.6464 | [tapas-medium-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/no_reset) MEDIUM | reset | 0.6561 | [tapas-medium-finetuned-sqa](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/main) SMALL | noreset | 0.5876 | [tapas-small-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/no_reset) SMALL | reset | 0.6155 | [tapas-small-finetuned-sqa](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/main) **MINI** | **noreset** | **0.4574** | [tapas-mini-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/no_reset) **MINI** | **reset** | **0.5148** | [tapas-mini-finetuned-sqa](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/main)) TINY | noreset | 0.2004 | [tapas-tiny-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/no_reset) TINY | reset | 0.2375 | [tapas-tiny-finetuned-sqa](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/main) ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. 
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head on top of the pre-trained model, and then jointly train this randomly initialized classification head with the base model on SQA. ## Intended uses & limitations You can use this model for answering questions related to a table in a conversational set-up. For code examples, we refer to the documentation of TAPAS on the HuggingFace website. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Question [SEP] Flattened table [SEP] ``` ### Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with maximum sequence length 512 and batch size of 128. In this setup, fine-tuning takes around 20 hours. The optimizer used is Adam with a learning rate of 1.25e-5, and a warmup ratio of 0.2. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the `select_one_column` parameter of `TapasConfig`. See also table 12 of the [original paper](https://arxiv.org/abs/2004.02349). ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @InProceedings{iyyer2017search-based, author = {Iyyer, Mohit and Yih, Scott Wen-tau and Chang, Ming-Wei}, title = {Search-based Neural Structured Learning for Sequential Question Answering}, booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics}, year = {2017}, month = {July}, abstract = {Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. 
We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.}, publisher = {Association for Computational Linguistics}, url = {https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/}, } ```
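As with the other TAPAS cards, the usage code is deferred to the documentation; a minimal, hedged sketch with the `table-question-answering` pipeline is shown below. The table and question are invented, and in the true SQA setting the questions would form a conversational sequence rather than a single query.

```python
from transformers import pipeline

# Minimal sketch: single-question table QA. All table cells must be strings.
tqa = pipeline("table-question-answering", model="google/tapas-mini-finetuned-sqa")

table = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
         "Number of movies": ["87", "53", "69"]}

print(tqa(table=table, query="How many movies has George Clooney played in?"))
```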
google/tapas-medium-finetuned-tabfact
google
2021-11-29T13:09:54Z
12
0
transformers
[ "transformers", "pytorch", "tf", "tapas", "text-classification", "sequence-classification", "en", "dataset:tab_fact", "arxiv:2010.00571", "arxiv:2004.02349", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en tags: - tapas - sequence-classification license: apache-2.0 datasets: - tab_fact --- # TAPAS medium model fine-tuned on Tabular Fact Checking (TabFact) This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_medium_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is the one with absolute position embeddings: - `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_medium` Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then jointly train this randomly initialized classification head with the base model on TabFact. ## Intended uses & limitations You can use this model for classifying whether a sentence is supported or refuted by the contents of a table. For code examples, we refer to the documentation of TAPAS on the HuggingFace website. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence [SEP] Flattened table [SEP] ``` ### Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512. 
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2). ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @inproceedings{2019TabFactA, title={TabFact : A Large-scale Dataset for Table-based Fact Verification}, author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang}, booktitle = {International Conference on Learning Representations (ICLR)}, address = {Addis Ababa, Ethiopia}, month = {April}, year = {2020} } ```
google/tapas-small-finetuned-sqa
google
2021-11-29T13:09:34Z
523
0
transformers
[ "transformers", "pytorch", "tf", "tapas", "table-question-answering", "en", "dataset:msr_sqa", "arxiv:2004.02349", "arxiv:2010.00571", "license:apache-2.0", "endpoints_compatible", "region:us" ]
table-question-answering
2022-03-02T23:29:05Z
--- language: en tags: - tapas license: apache-2.0 datasets: - msr_sqa --- # TAPAS small model fine-tuned on Sequential Question Answering (SQA) This model has 2 versions which can be used. The default version corresponds to the `tapas_sqa_inter_masklm_small_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). It uses relative position embeddings (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is: - `no_reset`, which corresponds to `tapas_sqa_inter_masklm_small` (intermediate pre-training, absolute position embeddings). Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Results on SQA - Dev Accuracy Size | Reset | Dev Accuracy | Link -------- | --------| -------- | ---- LARGE | noreset | 0.7223 | [tapas-large-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/no_reset) LARGE | reset | 0.7289 | [tapas-large-finetuned-sqa](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/main) BASE | noreset | 0.6737 | [tapas-base-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/no_reset) BASE | reset | 0.6874 | [tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/main) MEDIUM | noreset | 0.6464 | [tapas-medium-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/no_reset) MEDIUM | reset | 0.6561 | [tapas-medium-finetuned-sqa](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/main) **SMALL** | **noreset** | **0.5876** | [tapas-small-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/no_reset) **SMALL** | **reset** | **0.6155** | [tapas-small-finetuned-sqa](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/main) MINI | noreset | 0.4574 | [tapas-mini-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/no_reset) MINI | reset | 0.5148 | [tapas-mini-finetuned-sqa](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/main)) TINY | noreset | 0.2004 | [tapas-tiny-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/no_reset) TINY | reset | 0.2375 | [tapas-tiny-finetuned-sqa](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/main) ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. 
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head on top of the pre-trained model, and then jointly train this randomly initialized classification head with the base model on SQA. ## Intended uses & limitations You can use this model for answering questions related to a table in a conversational set-up. For code examples, we refer to the documentation of TAPAS on the HuggingFace website. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Question [SEP] Flattened table [SEP] ``` ### Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with maximum sequence length 512 and batch size of 128. In this setup, fine-tuning takes around 20 hours. The optimizer used is Adam with a learning rate of 1.25e-5, and a warmup ratio of 0.2. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the `select_one_column` parameter of `TapasConfig`. See also table 12 of the [original paper](https://arxiv.org/abs/2004.02349). ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @InProceedings{iyyer2017search-based, author = {Iyyer, Mohit and Yih, Scott Wen-tau and Chang, Ming-Wei}, title = {Search-based Neural Structured Learning for Sequential Question Answering}, booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics}, year = {2017}, month = {July}, abstract = {Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. 
We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.}, publisher = {Association for Computational Linguistics}, url = {https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/}, } ```
google/tapas-small-finetuned-wikisql-supervised
google
2021-11-29T13:07:06Z
18
7
transformers
[ "transformers", "pytorch", "tf", "tapas", "table-question-answering", "en", "dataset:wikisql", "arxiv:2004.02349", "arxiv:2010.00571", "arxiv:1709.00103", "license:apache-2.0", "endpoints_compatible", "region:us" ]
table-question-answering
2022-03-02T23:29:05Z
--- language: en tags: - tapas license: apache-2.0 datasets: - wikisql --- # TAPAS small model fine-tuned on WikiSQL (in a supervised fashion) his model has 2 versions which can be used. The default version corresponds to the `tapas_wikisql_sqa_inter_masklm_small_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), and [WikiSQL](https://github.com/salesforce/WikiSQL). It uses relative position embeddings (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is: - `no_reset`, which corresponds to `tapas_wikisql_sqa_inter_masklm_small` (intermediate pre-training, absolute position embeddings). Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly train these randomly initialized classification heads with the base model on SQA and WikiSQL. ## Intended uses & limitations You can use this model for answering questions related to a table. For code examples, we refer to the documentation of TAPAS on the HuggingFace website. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Question [SEP] Flattened table [SEP] ``` The authors did first convert the WikiSQL dataset into the format of SQA using automatic conversion scripts. 
### Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512. In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 6.17164e-5, and a warmup ratio of 0.1424. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and 12). ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @article{DBLP:journals/corr/abs-1709-00103, author = {Victor Zhong and Caiming Xiong and Richard Socher}, title = {Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning}, journal = {CoRR}, volume = {abs/1709.00103}, year = {2017}, url = {http://arxiv.org/abs/1709.00103}, archivePrefix = {arXiv}, eprint = {1709.00103}, timestamp = {Mon, 13 Aug 2018 16:48:41 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1709-00103.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
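Since the card again points to the documentation for code, here is a minimal, hedged sketch using `TapasForQuestionAnswering` together with the tokenizer's `convert_logits_to_predictions` helper; the table and question are invented, and the aggregation-index-to-operator mapping in the comment is the commonly used convention rather than something stated on this card.

```python
from transformers import TapasTokenizer, TapasForQuestionAnswering
import pandas as pd

tokenizer = TapasTokenizer.from_pretrained("google/tapas-small-finetuned-wikisql-supervised")
model = TapasForQuestionAnswering.from_pretrained("google/tapas-small-finetuned-wikisql-supervised")

# Toy table and question, purely for illustration. Cells must be strings.
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
        "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
queries = ["How many movies has George Clooney played in?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# Convert cell-selection and aggregation logits into cell coordinates + an aggregation index.
predicted_coordinates, predicted_aggregation = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)

# Aggregation indices are commonly 0=NONE, 1=SUM, 2=AVERAGE, 3=COUNT (assumed here).
answer_cells = [table.iat[row, col] for row, col in predicted_coordinates[0]]
print(answer_cells, predicted_aggregation[0])
```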
google/tapas-medium-finetuned-wikisql-supervised
google
2021-11-29T13:06:28Z
9
0
transformers
[ "transformers", "pytorch", "tf", "tapas", "table-question-answering", "en", "dataset:wikisql", "arxiv:2004.02349", "arxiv:2010.00571", "arxiv:1709.00103", "license:apache-2.0", "endpoints_compatible", "region:us" ]
table-question-answering
2022-03-02T23:29:05Z
--- language: en tags: - tapas license: apache-2.0 datasets: - wikisql --- # TAPAS medium model fine-tuned on WikiSQL (in a supervised fashion) his model has 2 versions which can be used. The default version corresponds to the `tapas_wikisql_sqa_inter_masklm_medium_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), and [WikiSQL](https://github.com/salesforce/WikiSQL). It uses relative position embeddings (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is: - `no_reset`, which corresponds to `tapas_wikisql_sqa_inter_masklm_medium` (intermediate pre-training, absolute position embeddings). Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly train these randomly initialized classification heads with the base model on SQA and WikiSQL. ## Intended uses & limitations You can use this model for answering questions related to a table. For code examples, we refer to the documentation of TAPAS on the HuggingFace website. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Question [SEP] Flattened table [SEP] ``` The authors did first convert the WikiSQL dataset into the format of SQA using automatic conversion scripts. 
### Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512. In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 6.17164e-5, and a warmup ratio of 0.1424. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and 12). ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @article{DBLP:journals/corr/abs-1709-00103, author = {Victor Zhong and Caiming Xiong and Richard Socher}, title = {Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning}, journal = {CoRR}, volume = {abs/1709.00103}, year = {2017}, url = {http://arxiv.org/abs/1709.00103}, archivePrefix = {arXiv}, eprint = {1709.00103}, timestamp = {Mon, 13 Aug 2018 16:48:41 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1709-00103.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
google/tapas-tiny-finetuned-tabfact
google
2021-11-29T13:06:24Z
14
0
transformers
[ "transformers", "pytorch", "tf", "tapas", "text-classification", "sequence-classification", "en", "dataset:tab_fact", "arxiv:2010.00571", "arxiv:2004.02349", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en tags: - tapas - sequence-classification license: apache-2.0 datasets: - tab_fact --- # TAPAS tiny model fine-tuned on Tabular Fact Checking (TabFact) This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_tiny_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is the one with absolute position embeddings: - `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_tiny` Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then jointly train this randomly initialized classification head with the base model on TabFact. ## Intended uses & limitations You can use this model for classifying whether a sentence is supported or refuted by the contents of a table. For code examples, we refer to the documentation of TAPAS on the HuggingFace website. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence [SEP] Flattened table [SEP] ``` ### Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512. 
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2). ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @inproceedings{2019TabFactA, title={TabFact : A Large-scale Dataset for Table-based Fact Verification}, author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang}, booktitle = {International Conference on Learning Representations (ICLR)}, address = {Addis Ababa, Ethiopia}, month = {April}, year = {2020} } ```
google/tapas-large-finetuned-wikisql-supervised
google
2021-11-29T13:05:23Z
124
6
transformers
[ "transformers", "pytorch", "tf", "tapas", "table-question-answering", "en", "dataset:wikisql", "arxiv:2004.02349", "arxiv:2010.00571", "arxiv:1709.00103", "license:apache-2.0", "endpoints_compatible", "region:us" ]
table-question-answering
2022-03-02T23:29:05Z
--- language: en tags: - tapas license: apache-2.0 datasets: - wikisql --- # TAPAS large model fine-tuned on WikiSQL (in a supervised fashion) his model has 2 versions which can be used. The default version corresponds to the `tapas_wikisql_sqa_inter_masklm_large_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), and [WikiSQL](https://github.com/salesforce/WikiSQL). It uses relative position embeddings (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is: - `no_reset`, which corresponds to `tapas_wikisql_sqa_inter_masklm_large` (intermediate pre-training, absolute position embeddings). Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly train these randomly initialized classification heads with the base model on SQA and WikiSQL. ## Intended uses & limitations You can use this model for answering questions related to a table. For code examples, we refer to the documentation of TAPAS on the HuggingFace website. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Question [SEP] Flattened table [SEP] ``` The authors did first convert the WikiSQL dataset into the format of SQA using automatic conversion scripts. 
### Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512. In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 6.17164e-5, and a warmup ratio of 0.1424. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and 12). ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @article{DBLP:journals/corr/abs-1709-00103, author = {Victor Zhong and Caiming Xiong and Richard Socher}, title = {Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning}, journal = {CoRR}, volume = {abs/1709.00103}, year = {2017}, url = {http://arxiv.org/abs/1709.00103}, archivePrefix = {arXiv}, eprint = {1709.00103}, timestamp = {Mon, 13 Aug 2018 16:48:41 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1709-00103.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
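### Usage sketch
A minimal, illustrative example of querying a table with this checkpoint through the `table-question-answering` pipeline. The table and question are made-up placeholders, the checkpoint id `google/tapas-large-finetuned-wikisql-supervised` is assumed to be the repository hosting this card, and TAPAS models in `transformers` may additionally require the `torch-scatter` package.

```python
from transformers import pipeline

# Load the checkpoint in the table-question-answering pipeline (checkpoint id assumed).
tqa = pipeline(
    "table-question-answering",
    model="google/tapas-large-finetuned-wikisql-supervised",
)

# Toy table; TAPAS expects every cell value to be a string.
table = {
    "Actor": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
}
result = tqa(table=table, query="How many movies does Leonardo Di Caprio have?")
print(result["answer"])  # answer string built from the selected cells and predicted aggregation
```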
google/tapas-large-finetuned-sqa
google
2021-11-29T13:03:46Z
240
6
transformers
[ "transformers", "pytorch", "tf", "tapas", "table-question-answering", "en", "dataset:msr_sqa", "arxiv:2004.02349", "arxiv:2010.00571", "license:apache-2.0", "endpoints_compatible", "region:us" ]
table-question-answering
2022-03-02T23:29:05Z
--- language: en tags: - tapas license: apache-2.0 datasets: - msr_sqa --- # TAPAS large model fine-tuned on Sequential Question Answering (SQA) This model has 2 versions which can be used. The default version corresponds to the `tapas_sqa_inter_masklm_large_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). It uses relative position embeddings (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is: - `no_reset`, which corresponds to `tapas_sqa_inter_masklm_large` (intermediate pre-training, absolute position embeddings). Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Results on SQA - Dev Accuracy Size | Reset | Dev Accuracy | Link -------- | --------| -------- | ---- **LARGE** | **noreset** | **0.7223** | [tapas-large-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/no_reset) **LARGE** | **reset** | **0.7289** | [tapas-large-finetuned-sqa](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/main) BASE | noreset | 0.6737 | [tapas-base-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/no_reset) BASE | reset | 0.6874 | [tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/main) MEDIUM | noreset | 0.6464 | [tapas-medium-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/no_reset) MEDIUM | reset | 0.6561 | [tapas-medium-finetuned-sqa](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/main) SMALL | noreset | 0.5876 | [tapas-small-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/no_reset) SMALL | reset | 0.6155 | [tapas-small-finetuned-sqa](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/main) MINI | noreset | 0.4574 | [tapas-mini-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/no_reset) MINI | reset | 0.5148 | [tapas-mini-finetuned-sqa](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/main) TINY | noreset | 0.2004 | [tapas-tiny-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/no_reset) TINY | reset | 0.2375 | [tapas-tiny-finetuned-sqa](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/main) ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head on top of the pre-trained model, and then jointly train this randomly initialized classification head with the base model on SQA. ## Intended uses & limitations You can use this model for answering questions related to a table in a conversational set-up. For code examples, we refer to the documentation of TAPAS on the HuggingFace website. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Question [SEP] Flattened table [SEP] ``` ### Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with maximum sequence length 512 and batch size of 128. In this setup, fine-tuning takes around 20 hours. The optimizer used is Adam with a learning rate of 1.25e-5, and a warmup ratio of 0.2. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the `select_one_column` parameter of `TapasConfig`. See also table 12 of the [original paper](https://arxiv.org/abs/2004.02349). ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @InProceedings{iyyer2017search-based, author = {Iyyer, Mohit and Yih, Scott Wen-tau and Chang, Ming-Wei}, title = {Search-based Neural Structured Learning for Sequential Question Answering}, booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics}, year = {2017}, month = {July}, abstract = {Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. 
We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.}, publisher = {Association for Computational Linguistics}, url = {https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/}, } ```
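### Usage sketch
A minimal, illustrative example of conversational (sequential) table QA with this checkpoint, assuming the `table-question-answering` pipeline's `sequential` option; the table and questions are made up, and `torch-scatter` may be required as an extra dependency.

```python
from transformers import pipeline

tqa = pipeline("table-question-answering", model="google/tapas-large-finetuned-sqa")

# Toy table with string-valued cells.
table = {
    "City": ["Paris", "Lyon", "London"],
    "Country": ["France", "France", "England"],
}

# sequential=True conditions each question on the previous questions and their answers.
questions = ["Which cities are in France?", "Which of them is listed first?"]
results = tqa(table=table, query=questions, sequential=True)
for result in results:
    print(result["answer"])
```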
kensho/beamsearch_decoder_dummy
kensho
2021-11-29T12:21:18Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
This is an example of how a kenLM model can be downloaded with [PyCTCDecode](https://github.com/kensho-technologies/pyctcdecode) . Simply run the following code: ```python from pyctcdecode import BeamSearchDecoderCTC decoder = BeamSearchDecoderCTC.load_from_hf_hub("kensho/beamsearch_decoder_dummy") ``` The model was created by [Patrick von Platen](https://huggingface.co/patrickvonplaten) for demonstration purposes.
google/tapas-base-finetuned-sqa
google
2021-11-29T11:41:09Z
2,467
6
transformers
[ "transformers", "pytorch", "tf", "tapas", "table-question-answering", "en", "dataset:msr_sqa", "arxiv:2004.02349", "arxiv:2010.00571", "license:apache-2.0", "endpoints_compatible", "region:us" ]
table-question-answering
2022-03-02T23:29:05Z
--- language: en tags: - tapas - table-question-answering license: apache-2.0 datasets: - msr_sqa --- # TAPAS base model fine-tuned on Sequential Question Answering (SQA) This model has 2 versions which can be used. The default version corresponds to the `tapas_sqa_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). It uses relative position embeddings (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is: - `no_reset`, which corresponds to `tapas_sqa_inter_masklm_base` (intermediate pre-training, absolute position embeddings). Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Results on SQA - Dev Accuracy Size | Reset | Dev Accuracy | Link -------- | --------| -------- | ---- LARGE | noreset | 0.7223 | [tapas-large-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/no_reset) LARGE | reset | 0.7289 | [tapas-large-finetuned-sqa](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/main) **BASE** | **noreset** | **0.6737** | [tapas-base-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/no_reset) **BASE** | **reset** | **0.6874** | [tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/main) MEDIUM | noreset | 0.6464 | [tapas-medium-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/no_reset) MEDIUM | reset | 0.6561 | [tapas-medium-finetuned-sqa](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/main) SMALL | noreset | 0.5876 | [tapas-small-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/no_reset) SMALL | reset | 0.6155 | [tapas-small-finetuned-sqa](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/main) MINI | noreset | 0.4574 | [tapas-mini-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/no_reset) MINI | reset | 0.5148 | [tapas-mini-finetuned-sqa](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/main)) TINY | noreset | 0.2004 | [tapas-tiny-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/no_reset) TINY | reset | 0.2375 | [tapas-tiny-finetuned-sqa](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/main) ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. 
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head on top of the pre-trained model, and then jointly train this randomly initialized classification head with the base model on SQA. ## Intended uses & limitations You can use this model for answering questions related to a table in a conversational set-up. For code examples, we refer to the documentation of TAPAS on the HuggingFace website. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Question [SEP] Flattened table [SEP] ``` ### Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with maximum sequence length 512 and batch size of 128. In this setup, fine-tuning takes around 20 hours. The optimizer used is Adam with a learning rate of 1.25e-5, and a warmup ratio of 0.2. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the `select_one_column` parameter of `TapasConfig`. See also table 12 of the [original paper](https://arxiv.org/abs/2004.02349). ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @InProceedings{iyyer2017search-based, author = {Iyyer, Mohit and Yih, Scott Wen-tau and Chang, Ming-Wei}, title = {Search-based Neural Structured Learning for Sequential Question Answering}, booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics}, year = {2017}, month = {July}, abstract = {Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. 
We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.}, publisher = {Association for Computational Linguistics}, url = {https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/}, } ```
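### Usage sketch
A brief, illustrative example of loading the two variants described above; it assumes (as the links in the results table suggest) that the absolute-position-embedding weights live on the `no_reset` branch of this repository, and that `torch-scatter` is installed.

```python
from transformers import TapasForQuestionAnswering, TapasTokenizer

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-sqa")

# Default checkpoint: relative position embeddings ("reset").
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-sqa")

# Non-default variant: absolute position embeddings, stored on the no_reset branch.
model_no_reset = TapasForQuestionAnswering.from_pretrained(
    "google/tapas-base-finetuned-sqa", revision="no_reset"
)
```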
huggingtweets/clubpenguinlore
huggingtweets
2021-11-29T11:26:47Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1464138503382568961/SjBJOFyh_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Club Penguin Lore</div> <div style="text-align: center; font-size: 14px;">@clubpenguinlore</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Club Penguin Lore. | Data | Club Penguin Lore | | --- | --- | | Tweets downloaded | 1891 | | Retweets | 148 | | Short tweets | 197 | | Tweets kept | 1546 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2du98ann/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @clubpenguinlore's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/921o14nr) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/921o14nr/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/clubpenguinlore') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
bhadresh-savani/bert-base-go-emotion
bhadresh-savani
2021-11-29T10:43:10Z
3,873
35
transformers
[ "transformers", "pytorch", "bert", "text-classification", "go-emotion", "en", "dataset:go_emotions", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4 tags: - text-classification - go-emotion - pytorch license: apache-2.0 datasets: - go_emotions metrics: - Accuracy --- # Bert-Base-Uncased-Go-Emotion ## Model description: [bert-base-uncased](https://huggingface.co/bert-base-uncased) fine-tuned for multi-label emotion classification on the [GoEmotions](https://huggingface.co/datasets/go_emotions) dataset. ## Training Parameters: ``` Num examples = 169208 Num Epochs = 3 Instantaneous batch size per device = 16 Total train batch size (w. parallel, distributed & accumulation) = 16 Gradient Accumulation steps = 1 Total optimization steps = 31728 ``` ## TrainOutput: ``` 'train_loss': 0.12085497042373672, ``` ## Evaluation Output: ``` 'eval_accuracy_thresh': 0.9614765048027039, 'eval_loss': 0.1164659634232521 ``` ## Colab Notebook: [Notebook](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/go_emotion_of_transformers_multilabel_text_classification_v2.ipynb)
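## Usage Example:
A minimal, illustrative sketch (the input sentence is made up; `return_all_scores=True` simply exposes the score of every GoEmotions label, which is the natural way to read a multi-label classifier):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/bert-base-go-emotion",
    return_all_scores=True,
)
print(classifier("I am so happy with how this turned out!"))
```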
google/tapas-medium
google
2021-11-29T10:15:00Z
11
0
transformers
[ "transformers", "pytorch", "tf", "tapas", "feature-extraction", "TapasModel", "en", "arxiv:2004.02349", "arxiv:2010.00571", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: en tags: - tapas - TapasModel license: apache-2.0 --- # TAPAS medium model This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_medium_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is the one with absolute position embeddings: - `revision="no_reset"`, which corresponds to `tapas_inter_masklm_medium` Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then jointly train these randomly initialized classification heads with the base model on a downstream task. ## Intended uses & limitations You can use the raw model for getting hidden representatons about table-question pairs, but it's mostly intended to be fine-tuned on a downstream task such as question answering or sequence classification. See the [model hub](https://huggingface.co/models?filter=tapas) to look for fine-tuned versions on a task that interests you. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence [SEP] Flattened table [SEP] ``` ### Pre-training The model was pre-trained on 32 Cloud TPU v3 cores for 1,000,000 steps with maximum sequence length 512 and batch size of 512. 
In this setup, pre-training on MLM only takes around 3 days. Additionally, the model has been further pre-trained on a second task (table entailment). See the original TAPAS [paper](https://www.aclweb.org/anthology/2020.acl-main.398/) and the [follow-up paper](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) for more details. The optimizer used is Adam with a learning rate of 5e-5, and a warmup ratio of 0.01. ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
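### Usage sketch
A minimal, illustrative example of extracting hidden states for a (question, table) pair with the raw checkpoint; the table and question are made up, and TAPAS in `transformers` may additionally require the `torch-scatter` package.

```python
import pandas as pd
from transformers import TapasModel, TapasTokenizer

tokenizer = TapasTokenizer.from_pretrained("google/tapas-medium")
model = TapasModel.from_pretrained("google/tapas-medium")

# TapasTokenizer expects a pandas DataFrame with string-valued cells.
table = pd.DataFrame({"Actor": ["Brad Pitt", "George Clooney"], "Age": ["56", "60"]})
inputs = tokenizer(table=table, queries=["How old is George Clooney?"], return_tensors="pt")

outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```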
google/tapas-small
google
2021-11-29T10:12:54Z
67
0
transformers
[ "transformers", "pytorch", "tf", "tapas", "feature-extraction", "TapasModel", "en", "arxiv:2004.02349", "arxiv:2010.00571", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: en tags: - tapas - TapasModel license: apache-2.0 --- # TAPAS small model This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_small_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is the one with absolute position embeddings: - `revision="no_reset"`, which corresponds to `tapas_inter_masklm_small` Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then jointly train these randomly initialized classification heads with the base model on a downstream task. ## Intended uses & limitations You can use the raw model for getting hidden representatons about table-question pairs, but it's mostly intended to be fine-tuned on a downstream task such as question answering or sequence classification. See the [model hub](https://huggingface.co/models?filter=tapas) to look for fine-tuned versions on a task that interests you. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence [SEP] Flattened table [SEP] ``` ### Pre-training The model was pre-trained on 32 Cloud TPU v3 cores for 1,000,000 steps with maximum sequence length 512 and batch size of 512. 
In this setup, pre-training on MLM only takes around 3 days. Additionally, the model has been further pre-trained on a second task (table entailment). See the original TAPAS [paper](https://www.aclweb.org/anthology/2020.acl-main.398/) and the [follow-up paper](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) for more details. The optimizer used is Adam with a learning rate of 5e-5, and a warmup ratio of 0.01. ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
google/tapas-mini
google
2021-11-29T10:11:56Z
12
0
transformers
[ "transformers", "pytorch", "tf", "tapas", "feature-extraction", "TapasModel", "en", "arxiv:2004.02349", "arxiv:2010.00571", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: en tags: - tapas - TapasModel license: apache-2.0 --- # TAPAS mini model This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_mini_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is the one with absolute position embeddings: - `revision="no_reset"`, which corresponds to `tapas_inter_masklm_mini` Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then jointly train these randomly initialized classification heads with the base model on a downstream task. ## Intended uses & limitations You can use the raw model for getting hidden representatons about table-question pairs, but it's mostly intended to be fine-tuned on a downstream task such as question answering or sequence classification. See the [model hub](https://huggingface.co/models?filter=tapas) to look for fine-tuned versions on a task that interests you. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence [SEP] Flattened table [SEP] ``` ### Pre-training The model was pre-trained on 32 Cloud TPU v3 cores for 1,000,000 steps with maximum sequence length 512 and batch size of 512. 
In this setup, pre-training on MLM only takes around 3 days. Additionally, the model has been further pre-trained on a second task (table entailment). See the original TAPAS [paper](https://www.aclweb.org/anthology/2020.acl-main.398/) and the [follow-up paper](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) for more details. The optimizer used is Adam with a learning rate of 5e-5, and a warmup ratio of 0.01. ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
google/tapas-tiny
google
2021-11-29T10:01:08Z
99
0
transformers
[ "transformers", "pytorch", "tf", "tapas", "feature-extraction", "TapasModel", "en", "arxiv:2004.02349", "arxiv:2010.00571", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: en tags: - tapas - TapasModel license: apache-2.0 --- # TAPAS tiny model This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_tiny_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is the one with absolute position embeddings: - `revision="no_reset"`, which corresponds to `tapas_inter_masklm_tiny` Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then jointly train these randomly initialized classification heads with the base model on a downstream task. ## Intended uses & limitations You can use the raw model for getting hidden representatons about table-question pairs, but it's mostly intended to be fine-tuned on a downstream task such as question answering or sequence classification. See the [model hub](https://huggingface.co/models?filter=tapas) to look for fine-tuned versions on a task that interests you. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence [SEP] Flattened table [SEP] ``` ### Pre-training The model was pre-trained on 32 Cloud TPU v3 cores for 1,000,000 steps with maximum sequence length 512 and batch size of 512. 
In this setup, pre-training on MLM only takes around 3 days. Additionally, the model has been further pre-trained on a second task (table entailment). See the original TAPAS [paper](https://www.aclweb.org/anthology/2020.acl-main.398/) and the [follow-up paper](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) for more details. The optimizer used is Adam with a learning rate of 5e-5, and a warmup ratio of 0.01. ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
dtam/autonlp-covid-fake-news-36839110
dtam
2021-11-29T05:58:03Z
5
0
transformers
[ "transformers", "pytorch", "albert", "text-classification", "autonlp", "unk", "dataset:dtam/autonlp-data-covid-fake-news", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - dtam/autonlp-data-covid-fake-news co2_eq_emissions: 123.79523392848652 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 36839110 - CO2 Emissions (in grams): 123.79523392848652 ## Validation Metrics - Loss: 0.17188367247581482 - Accuracy: 0.9714953271028037 - Precision: 0.9917948717948718 - Recall: 0.9480392156862745 - AUC: 0.9947452731092438 - F1: 0.9694235588972432 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/dtam/autonlp-covid-fake-news-36839110 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("dtam/autonlp-covid-fake-news-36839110", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("dtam/autonlp-covid-fake-news-36839110", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
am4nsolanki/autonlp-text-hateful-memes-36789092
am4nsolanki
2021-11-28T22:35:30Z
63
3
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "en", "dataset:am4nsolanki/autonlp-data-text-hateful-memes", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - am4nsolanki/autonlp-data-text-hateful-memes co2_eq_emissions: 1.4280361775467445 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 36789092 - CO2 Emissions (in grams): 1.4280361775467445 ## Validation Metrics - Loss: 0.5255328416824341 - Accuracy: 0.7666078777189889 - Precision: 0.6913123844731978 - Recall: 0.6192052980132451 - AUC: 0.7893359070795125 - F1: 0.6532751091703057 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/am4nsolanki/autonlp-text-hateful-memes-36789092 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("am4nsolanki/autonlp-text-hateful-memes-36789092", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("am4nsolanki/autonlp-text-hateful-memes-36789092", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
huggingtweets/_bravit
huggingtweets
2021-11-28T20:07:30Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/_bravit/1638130045930/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1322230137493065729/-h1nJf6U_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Виталий Брагилевский</div> <div style="text-align: center; font-size: 14px;">@_bravit</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Виталий Брагилевский. | Data | Виталий Брагилевский | | --- | --- | | Tweets downloaded | 3233 | | Retweets | 884 | | Short tweets | 489 | | Tweets kept | 1860 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ekzbpfn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_bravit's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10wax6wi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10wax6wi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/_bravit') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
raynardj/pmc-med-bio-mlm-roberta-large
raynardj
2021-11-28T13:57:31Z
4
1
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - en tags: - fill-mask - roberta widget: - text: "Polymerase <mask> Reaction" --- # PMC pretrained RoBERTa large model Pretrained on PMC full-text paragraphs with the masked language modeling task; the corpus consists mostly of biology/medical papers.
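A minimal, illustrative example using the widget sentence above with the fill-mask pipeline:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="raynardj/pmc-med-bio-mlm-roberta-large")
for prediction in unmasker("Polymerase <mask> Reaction"):
    print(prediction["token_str"], round(prediction["score"], 3))
```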
Alvenir/wav2vec2-base-da
Alvenir
2021-11-28T11:35:11Z
12
6
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "speech", "da", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
--- language: da tags: - speech license: apache-2.0 --- # Wav2vec2-base for Danish This wav2vec2-base model has been pretrained on ~1300 hours of Danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not publicly available. However, we were allowed to distribute the pretrained model. This model was pretrained on 16kHz sampled speech audio. When using the model, make sure to use speech audio sampled at 16kHz. The pre-training was done using the fairseq library in January 2021. It needs to be fine-tuned to perform speech recognition. # Finetuning In order to fine-tune the model for speech recognition, you can draw inspiration from this [notebook tutorial](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F) or [this blog post tutorial](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2).
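As a rough, illustrative sketch of the first fine-tuning step (not a full recipe; see the tutorials above), the pretrained encoder can be loaded into a CTC model whose head is then trained on labelled 16kHz Danish audio. The `vocab_size` and `pad_token_id` below are placeholders that depend on the character vocabulary you build from your own data:

```python
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "Alvenir/wav2vec2-base-da",
    vocab_size=32,              # placeholder: size of your fine-tuning vocabulary
    pad_token_id=0,             # placeholder: id of your tokenizer's padding token
    ctc_loss_reduction="mean",
)
# The CTC head is randomly initialised here and must be fine-tuned before use.
model.freeze_feature_extractor()  # common choice when fine-tuning a base-sized wav2vec2
```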
amtam0/timer-ner-en
amtam0
2021-11-28T09:58:54Z
7
1
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - flair - token-classification - sequence-tagger-model language: en widget: - text: "12 sets of 2 minutes 38 minutes between each set" --- #### This model is used in the [Speech Interval Timer app](https://medium.com/@amtam0/speech-interval-timer-app-using-transformers-1df8fa3821d5) 7-class NER English model using [Flair TransformerWordEmbeddings - distilroberta-base](https://github.com/flairNLP/flair/). | **tag** | **meaning** | |---------------------------------|-----------| | nb_rounds | Number of rounds | | duration_br_sd | Duration btwn rounds in seconds | | duration_br_min | Duration btwn rounds in minutes | | duration_br_hr | Duration btwn rounds in hours | | duration_wt_sd | workout duration in seconds | | duration_wt_min | workout duration in minutes | | duration_wt_hr | workout duration in hours | --- The dataset was created manually (perfectible). Sentences example : ``` 19 sets of 3 minutes 21 minutes between sets start 7 sets of 32 seconds create 13 sets of 26 seconds init 8 series of 3 hours 2 sets of 30 seconds 35 minutes between each cycle ... ```
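A minimal, illustrative sketch of tagging one of the example sentences with Flair, assuming the tagger loads directly from this Hub repository:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("amtam0/timer-ner-en")

sentence = Sentence("12 sets of 2 minutes 38 minutes between each set")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```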
aditi2222/t5-paraphrase
aditi2222
2021-11-28T07:35:16Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
T5 model. This is a sentence-transformers model.
Matthijsvanhof/bert-base-dutch-cased-finetuned-NER8
Matthijsvanhof
2021-11-27T23:02:08Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-dutch-cased-finetuned-NER8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-dutch-cased-finetuned-NER8 This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1482 - Precision: 0.4716 - Recall: 0.4359 - F1: 0.4530 - Accuracy: 0.9569 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 68 | 0.1705 | 0.3582 | 0.3488 | 0.3535 | 0.9475 | | No log | 2.0 | 136 | 0.1482 | 0.4716 | 0.4359 | 0.4530 | 0.9569 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Tokenizers 0.10.3
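A minimal, illustrative usage sketch (the Dutch example sentence is made up):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Matthijsvanhof/bert-base-dutch-cased-finetuned-NER8",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Anne woont in Amsterdam en werkt bij de Universiteit Utrecht."))
```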
huggingtweets/v23242526
huggingtweets
2021-11-27T21:51:33Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/v23242526/1638049876119/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1464483016022142978/CRW80oGV_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">v</div> <div style="text-align: center; font-size: 14px;">@v23242526</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from v. | Data | v | | --- | --- | | Tweets downloaded | 322 | | Retweets | 7 | | Short tweets | 146 | | Tweets kept | 169 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ms3xysdk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @v23242526's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3gcrzkfj) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3gcrzkfj/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/v23242526') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lgris/bp-tedx100-xlsr
lgris
2021-11-27T21:12:23Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "portuguese-speech-corpus", "PyTorch", "dataset:common_voice", "dataset:mls", "dataset:cetuc", "dataset:lapsbm", "dataset:voxforge", "dataset:tedx", "dataset:sid", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---

# tedx100-xlsr: Wav2vec 2.0 with TEDx Dataset

This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [TEDx multilingual in Portuguese](http://www.openslr.org/100) dataset.

In this notebook the model is tested against other available Brazilian Portuguese datasets.

| Dataset                        |  Train |  Valid |  Test |
|--------------------------------|-------:|-------:|------:|
| CETUC                          |        |     -- |  5.4h |
| Common Voice                   |        |     -- |  9.5h |
| LaPS BM                        |        |     -- |  0.1h |
| MLS                            |        |     -- |  3.7h |
| Multilingual TEDx (Portuguese) | 148.8h |     -- |  1.8h |
| SID                            |        |     -- |  1.0h |
| VoxForge                       |        |     -- |  0.1h |
| Total                          | 148.8h |     -- | 21.6h |

#### Summary

|                                           | CETUC | CV    | LaPS  | MLS   | SID   | TEDx  | VF    | AVG   |
|-------------------------------------------|-------|-------|-------|-------|-------|-------|-------|-------|
| tedx\_100 (demonstration below)           | 0.138 | 0.369 | 0.169 | 0.165 | 0.794 | 0.222 | 0.395 | 0.321 |
| tedx\_100 + 4-gram (demonstration below)  | 0.123 | 0.414 | 0.171 | 0.152 | 0.982 | 0.215 | 0.395 | 0.350 |

## Demonstration

```python
MODEL_NAME = "lgris/tedx100-xlsr"
```

### Imports and dependencies

```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```

```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```

### Helpers

```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]'  # noqa: W605

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = speech.squeeze(0).numpy()
    batch["sampling_rate"] = 16_000
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    batch["target"] = batch["sentence"]
    return batch
```

```python
def calc_metrics(truths, hypos):
    wers = []
    mers = []
    wils = []
    for t, h in zip(truths, hypos):
        try:
            wers.append(jiwer.wer(t, h))
            mers.append(jiwer.mer(t, h))
            wils.append(jiwer.wil(t, h))
        except:  # Empty string?
            pass
    wer = sum(wers)/len(wers)
    mer = sum(mers)/len(mers)
    wil = sum(wils)/len(wils)
    return wer, mer, wil
```

```python
def load_data(dataset):
    data_files = {'test': f'{dataset}/test.csv'}
    dataset = load_dataset('csv', data_files=data_files)["test"]
    return dataset.map(map_to_array)
```

### Model

```python
class STT:

    def __init__(self,
                 model_name,
                 device='cuda' if torch.cuda.is_available() else 'cpu',
                 lm=None):
        self.model_name = model_name
        self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
        self.processor = Wav2Vec2Processor.from_pretrained(model_name)
        self.vocab_dict = self.processor.tokenizer.get_vocab()
        self.sorted_dict = {
            k.lower(): v for k, v in sorted(self.vocab_dict.items(),
                                            key=lambda item: item[1])
        }
        self.device = device
        self.lm = lm
        if self.lm:
            self.lm_decoder = build_ctcdecoder(
                list(self.sorted_dict.keys()),
                self.lm
            )

    def batch_predict(self, batch):
        features = self.processor(batch["speech"],
                                  sampling_rate=batch["sampling_rate"][0],
                                  padding=True,
                                  return_tensors="pt")
        input_values = features.input_values.to(self.device)
        attention_mask = features.attention_mask.to(self.device)
        with torch.no_grad():
            logits = self.model(input_values, attention_mask=attention_mask).logits
        if self.lm:
            logits = logits.cpu().numpy()
            batch["predicted"] = []
            for sample_logits in logits:
                batch["predicted"].append(self.lm_decoder.decode(sample_logits))
        else:
            pred_ids = torch.argmax(logits, dim=-1)
            batch["predicted"] = self.processor.batch_decode(pred_ids)
        return batch
```

### Download datasets

```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```

### Tests

```python
stt = STT(MODEL_NAME)
```

#### CETUC

```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```

    CETUC WER: 0.13846663354859937

#### Common Voice

```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```

    CV WER: 0.36960721735520236

#### LaPS

```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```

    Laps WER: 0.16941287878787875

#### MLS

```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```

    MLS WER: 0.16586103382107384

#### SID

```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```

    Sid WER: 0.7943364822145216

#### TEDx

```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```

    TEDx WER: 0.22221476803982182

#### VoxForge

```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```

    VoxForge WER: 0.39486066017315996

### Tests with LM

```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```

#### CETUC

```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```

    CETUC WER: 0.12338749517028079

#### Common Voice

```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```

    CV WER: 0.4146185693398481

#### LaPS

```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```

    Laps WER: 0.17142676767676762

#### MLS

```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```

    MLS WER: 0.15212081808962674

#### SID

```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```

    Sid WER: 0.982518441309493

#### TEDx

```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```

    TEDx WER: 0.21567860841157235

#### VoxForge

```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```

    VoxForge WER: 0.3952218614718614
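The evaluation helpers above operate on batched CSV test sets. For a quick sanity check on a single recording, a minimal greedy-decoding sketch such as the one below can be used; the audio path is a placeholder, and the input is assumed to be (or is resampled to) 16 kHz mono:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_NAME = "lgris/tedx100-xlsr"
AUDIO_PATH = "example.wav"  # placeholder path to any Brazilian Portuguese recording

processor = Wav2Vec2Processor.from_pretrained(MODEL_NAME)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_NAME)

speech, sr = torchaudio.load(AUDIO_PATH)
if sr != 16_000:
    # resample to the 16 kHz rate the model expects
    speech = torchaudio.transforms.Resample(sr, 16_000)(speech)

inputs = processor(speech.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```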
lgris/bp-commonvoice100-xlsr
lgris
2021-11-27T21:04:12Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "portuguese-speech-corpus", "PyTorch", "dataset:common_voice", "dataset:mls", "dataset:cetuc", "dataset:lapsbm", "dataset:voxforge", "dataset:tedx", "dataset:sid", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---

# commonvoice100-xlsr: Wav2vec 2.0 with Common Voice Dataset

This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [Common Voice 7.0](https://commonvoice.mozilla.org/pt) dataset.

In this notebook the model is tested against other available Brazilian Portuguese datasets.

| Dataset                        |  Train |  Valid |  Test |
|--------------------------------|-------:|-------:|------:|
| CETUC                          |        |     -- |  5.4h |
| Common Voice                   |  37.8h |     -- |  9.5h |
| LaPS BM                        |        |     -- |  0.1h |
| MLS                            |        |     -- |  3.7h |
| Multilingual TEDx (Portuguese) |        |     -- |  1.8h |
| SID                            |        |     -- |  1.0h |
| VoxForge                       |        |     -- |  0.1h |
| Total                          |        |     -- | 21.6h |

#### Summary

|                                                  | CETUC | CV    | LaPS  | MLS   | SID   | TEDx  | VF    | AVG   |
|--------------------------------------------------|-------|-------|-------|-------|-------|-------|-------|-------|
| commonvoice\_100 (demonstration below)           | 0.088 | 0.126 | 0.121 | 0.173 | 0.177 | 0.424 | 0.145 | 0.179 |
| commonvoice\_100 + 4-gram (demonstration below)  | 0.057 | 0.095 | 0.076 | 0.138 | 0.146 | 0.382 | 0.130 | 0.146 |

## Demonstration

```python
MODEL_NAME = "lgris/commonvoice100-xlsr"
```

### Imports and dependencies

```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```

```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```

### Helpers

```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]'  # noqa: W605

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = speech.squeeze(0).numpy()
    batch["sampling_rate"] = 16_000
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    batch["target"] = batch["sentence"]
    return batch
```

```python
def calc_metrics(truths, hypos):
    wers = []
    mers = []
    wils = []
    for t, h in zip(truths, hypos):
        try:
            wers.append(jiwer.wer(t, h))
            mers.append(jiwer.mer(t, h))
            wils.append(jiwer.wil(t, h))
        except:  # Empty string?
            pass
    wer = sum(wers)/len(wers)
    mer = sum(mers)/len(mers)
    wil = sum(wils)/len(wils)
    return wer, mer, wil
```

```python
def load_data(dataset):
    data_files = {'test': f'{dataset}/test.csv'}
    dataset = load_dataset('csv', data_files=data_files)["test"]
    return dataset.map(map_to_array)
```

### Model

```python
class STT:

    def __init__(self,
                 model_name,
                 device='cuda' if torch.cuda.is_available() else 'cpu',
                 lm=None):
        self.model_name = model_name
        self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
        self.processor = Wav2Vec2Processor.from_pretrained(model_name)
        self.vocab_dict = self.processor.tokenizer.get_vocab()
        self.sorted_dict = {
            k.lower(): v for k, v in sorted(self.vocab_dict.items(),
                                            key=lambda item: item[1])
        }
        self.device = device
        self.lm = lm
        if self.lm:
            self.lm_decoder = build_ctcdecoder(
                list(self.sorted_dict.keys()),
                self.lm
            )

    def batch_predict(self, batch):
        features = self.processor(batch["speech"],
                                  sampling_rate=batch["sampling_rate"][0],
                                  padding=True,
                                  return_tensors="pt")
        input_values = features.input_values.to(self.device)
        attention_mask = features.attention_mask.to(self.device)
        with torch.no_grad():
            logits = self.model(input_values, attention_mask=attention_mask).logits
        if self.lm:
            logits = logits.cpu().numpy()
            batch["predicted"] = []
            for sample_logits in logits:
                batch["predicted"].append(self.lm_decoder.decode(sample_logits))
        else:
            pred_ids = torch.argmax(logits, dim=-1)
            batch["predicted"] = self.processor.batch_decode(pred_ids)
        return batch
```

### Download datasets

```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```

### Tests

```python
stt = STT(MODEL_NAME)
```

#### CETUC

```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```

    CETUC WER: 0.08868880057404624

#### Common Voice

```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```

    CV WER: 0.12601035333655114

#### LaPS

```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```

    Laps WER: 0.12149621212121209

#### MLS

```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```

    MLS WER: 0.173594387890256

#### SID

```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```

    Sid WER: 0.1775290775992294

#### TEDx

```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```

    TEDx WER: 0.4245704568241374

#### VoxForge

```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```

    VoxForge WER: 0.14541801948051947

### Tests with LM

```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```

#### CETUC

```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```

    CETUC WER: 0.05764220069547976

#### Common Voice

```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```

    CV WER: 0.09569130510737103

#### LaPS

```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```

    Laps WER: 0.07688131313131312

#### MLS

```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```

    MLS WER: 0.13814768877494732

#### SID

```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```

    Sid WER: 0.14652459944499036

#### TEDx

```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```

    TEDx WER: 0.38196090002435623

#### VoxForge

```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```

    VoxForge WER: 0.13054112554112554
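All WER values above come from `calc_metrics`, which averages per-utterance `jiwer` scores. As a minimal, self-contained illustration of what one such comparison looks like, the toy pair below (made-up sentences, not drawn from any of the test sets) can be scored directly:

```python
import jiwer

reference = "o gato subiu no telhado"    # made-up reference transcript
hypothesis = "o gato subiu no telhando"  # made-up ASR output with one substitution

print("WER:", jiwer.wer(reference, hypothesis))  # word error rate
print("MER:", jiwer.mer(reference, hypothesis))  # match error rate
print("WIL:", jiwer.wil(reference, hypothesis))  # word information lost
```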