| Column | Type | Values / lengths |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0–18.3M |
| metadata | stringlengths | 2–1.07B |
| id | stringlengths | 5–122 |
| last_modified | null | |
| tags | listlengths | 1–1.84k |
| sha | null | |
| created_at | stringlengths | 25–25 |
null
null
{}
deepshikharbhardwaj/model_name
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
baikal-nlp/dbert-eth2
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
baikal-nlp/dbert-ner
null
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
```python
from transformers import BertForSequenceClassification, BertTokenizer, TextClassificationPipeline

model = BertForSequenceClassification.from_pretrained("deeq/dbert-sentiment")
tokenizer = BertTokenizer.from_pretrained("deeq/dbert")
nlp = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(nlp("좋아요"))
print(nlp("글쎄요"))
```
{}
baikal-nlp/dbert-sentiment
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
deeqBERT-base
---
- model: bert-base
- vocab: bert-wordpiece, 35k
- version: latest
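The card lists only checkpoint specs. A minimal masked-LM usage sketch, assuming the checkpoint works with the standard fill-mask pipeline and using the repository id from this row; the Korean example sentence is made up for illustration:

```python
from transformers import pipeline

# Minimal sketch, assuming the checkpoint loads with the standard fill-mask pipeline.
# The Korean example sentence is invented for illustration.
fill_mask = pipeline("fill-mask", model="baikal-nlp/dbert")
print(fill_mask("서울은 한국의 [MASK]이다."))
```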
{"language": "ko", "datasets": ["kowiki", "news"]}
baikal-nlp/dbert
null
[ "transformers", "pytorch", "bert", "fill-mask", "ko", "dataset:kowiki", "dataset:news", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
deeqBERT5
---
- model: bert-base
- vocab: deeqnlp 1.5, 50k
- version: latest/3.5
{}
baikal-nlp/dbert5
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
deeqELECTRA-base
---
- model: electra-base-generator
- vocab: bert-wordpiece, 35k
- version: beta, 1.71M
{"language": "ko", "datasets": ["kowiki", "news"]}
baikal-nlp/delectra-generator
null
[ "transformers", "pytorch", "electra", "fill-mask", "ko", "dataset:kowiki", "dataset:news", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
deeqELECTRA-base
---
- model: electra-base-discriminator
- vocab: bert-wordpiece, 35k
- version: beta, 1.71M
{"language": "ko", "datasets": ["kowiki", "news"]}
baikal-nlp/delectra
null
[ "transformers", "pytorch", "electra", "pretraining", "ko", "dataset:kowiki", "dataset:news", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilgpt2-finetuned-amazon-reviews

This model was trained from scratch on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Framework versions

- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
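For readability, the hyperparameters listed above can be expressed as a `TrainingArguments` configuration. This is only an illustrative sketch, not the original training script: the `output_dir` is a placeholder and any setting not listed in the card falls back to library defaults.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed in the card above.
# output_dir is a placeholder; optimizer and scheduler values match the stated defaults.
args = TrainingArguments(
    output_dir="distilgpt2-finetuned-amazon-reviews",
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```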
{"tags": ["generated_from_trainer"], "datasets": [], "model_index": [{"name": "distilgpt2-finetuned-amazon-reviews", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]}
defex/distilgpt2-finetuned-amazon-reviews
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
defex/distilgpt2-movie-review-generation
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# german-qg-t5-drink600

This model is fine-tuned for question generation in German. The expected answer must be highlighted with a &lt;hl> token. It is based on [german-qg-t5-quad](https://huggingface.co/dehio/german-qg-t5-quad) and further fine-tuned on drink-related questions.

## Task example

#### Input

generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung, die sowohl &lt;hl>im Sommer wie auch zu Silvester&lt;hl> funktioniert.

#### Expected Question

Zu welchen Gelegenheiten passt der Monk Sour gut?

## Model description

The model is based on [german-qg-t5-quad](https://huggingface.co/dehio/german-qg-t5-quad), which was pre-trained on [GermanQUAD](https://www.deepset.ai/germanquad). We further fine-tuned it on questions annotated on drink recipes from [Mixology](https://mixology.eu/) ("drink600"). We have not yet open-sourced the dataset, since we do not own the copyright on the source material.

## Training and evaluation data

The training script can be accessed [here](https://github.com/d-e-h-i-o/german-qg).

## Evaluation

It achieves a **BLEU-4 score of 29.80** on the drink600 test set (n=120) and **11.30** on the GermanQUAD test set. Thus, fine-tuning on drink600 did not affect performance on GermanQUAD. In comparison, *german-qg-t5-quad* achieves a BLEU-4 score of **10.76** on the drink600 test set.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Framework versions

- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
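A minimal generation sketch for this card, assuming the checkpoint loads with the standard T5 seq2seq classes. The prompt is the card's own task example; the `max_length` value is an illustrative choice, not taken from the card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch only: answer-aware question generation; the <hl> tokens mark the answer span.
tokenizer = AutoTokenizer.from_pretrained("dehio/german-qg-t5-drink600")
model = AutoModelForSeq2SeqLM.from_pretrained("dehio/german-qg-t5-drink600")

prompt = ("generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung, "
          "die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)  # illustrative generation setting
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```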
{"language": ["de"], "license": "mit", "tags": ["question generation"], "datasets": ["deepset/germanquad"], "widget": [{"text": "generate question: Der Monk Sour Drink ist ein somit eine aromatische \u00dcberraschung, die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert."}], "model-index": [{"name": "german-qg-t5-drink600", "results": []}]}
dehio/german-qg-t5-drink600
null
[ "transformers", "pytorch", "t5", "text2text-generation", "question generation", "de", "dataset:deepset/germanquad", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# german-qg-t5-e2e-quad (Work in progress)

This model is an end-to-end question generation model for German. Given a text, it generates several questions about it. It is a fine-tuned version of [valhalla/t5-base-e2e-qg](https://huggingface.co/valhalla/t5-base-e2e-qg) on the [GermanQuAD dataset from deepset](https://huggingface.co/datasets/deepset/germanquad).

## Model description

More information needed

## Training and evaluation data

- Bleu_1: 0.196051
- Bleu_2: 0.122380
- Bleu_3: 0.079980
- Bleu_4: 0.053672

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
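A minimal end-to-end generation sketch. It assumes the checkpoint keeps the interface of its base model `valhalla/t5-base-e2e-qg` (including a `generate questions:` prefix, which is an assumption, not stated in the card); the input text is shortened from the card's widget example, and `max_length=128` follows the card metadata.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch only: end-to-end question generation. The "generate questions: " prefix is an
# assumption inherited from the base valhalla/t5-base-e2e-qg model.
tokenizer = AutoTokenizer.from_pretrained("dehio/german-qg-t5-e2e-quad")
model = AutoModelForSeq2SeqLM.from_pretrained("dehio/german-qg-t5-e2e-quad")

text = ("generate questions: Naturschutzwarte haben auf der ostfriesischen Insel Wangerooge "
        "zwei seltene Kurzschnäuzige Seepferdchen entdeckt.")
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```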
{"language": ["de"], "license": "mit", "tags": ["question generation"], "datasets": ["deepset/germanquad"], "widget": [{"text": "Naturschutzwarte haben auf der ostfriesischen Insel Wangerooge zwei seltene Kurzschn\u00e4uzige Seepferdchen entdeckt. Die Tiere seien vergangene Woche bei einer sogenannten Sp\u00fclsaumkontrolle entdeckt worden, bei der die Str\u00e4nde eigentlich nach M\u00fcll und toten V\u00f6geln abgesucht w\u00fcrden, sagte der Gesch\u00e4ftsf\u00fchrer der zust\u00e4ndigen Naturschutz- und Forschungsgemeinschaft Mellumrat, Mathias Heckroth. Dabei seien den Natursch\u00fctzern am Nordstrand kurz hintereinander die beiden leblosen, nur wenige Zentimeter gro\u00dfen Tiere aufgefallen. Experten der Nationalparkverwaltung bestimmten beide Tiere als Kurzschn\u00e4uzige Seepferdchen (Hippocampus hippocampus)."}], "inference": {"parameters": {"max_length": 128}}, "model-index": [{"name": "german-qg-t5-e2e-quad", "results": []}]}
dehio/german-qg-t5-e2e-quad
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "question generation", "de", "dataset:deepset/germanquad", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# german-qg-t5-quad

This model is fine-tuned for question generation in German. The expected answer must be highlighted with a &lt;hl> token.

## Task example

#### Input

generate question: Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth Erben des <hl> britischen Common Laws <hl> sind, setzt sich das amerikanische Recht bedeutend davon ab. Dies rührt größtenteils von dem langen Zeitraum her, [...]

#### Expected output

Von welchem Gesetzt stammt das Amerikanische ab?

## Model description

This model is a fine-tuned version of [valhalla/t5-base-qg-hl](https://huggingface.co/valhalla/t5-base-qg-hl) on the [GermanQUAD](https://www.deepset.ai/germanquad) dataset.

## Training and evaluation data

The training script can be accessed [here](https://github.com/d-e-h-i-o/german-qg).

### Evaluation

The model achieves a BLEU-4 score of **11.30** on the GermanQUAD test set (n=2204).

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Framework versions

- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
{"language": ["de"], "license": "mit", "tags": ["question generation"], "datasets": ["deepset/germanquad"], "widget": [{"text": "Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth Erben des <hl>britischen Common Laws<hl> sind, setzt sich das amerikanische Recht bedeutend davon ab."}], "model-index": [{"name": "german-qg-t5-quad", "results": []}]}
dehio/german-qg-t5-quad
null
[ "transformers", "pytorch", "t5", "text2text-generation", "question generation", "de", "dataset:deepset/germanquad", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
delibaelyas/deliba
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0602 - Precision: 0.9251 - Recall: 0.9370 - F1: 0.9310 - Accuracy: 0.9839 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2435 | 1.0 | 878 | 0.0685 | 0.9182 | 0.9221 | 0.9202 | 0.9816 | | 0.0515 | 2.0 | 1756 | 0.0602 | 0.9212 | 0.9368 | 0.9289 | 0.9834 | | 0.0301 | 3.0 | 2634 | 0.0602 | 0.9251 | 0.9370 | 0.9310 | 0.9839 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.925115970841617, "name": "Precision"}, {"type": "recall", "value": 0.9370175634858485, "name": "Recall"}, {"type": "f1", "value": 0.9310287333963209, "name": "F1"}, {"type": "accuracy", "value": 0.9839388692074285, "name": "Accuracy"}]}]}]}
delpart/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# DialoGPT-medium-based model of Dwight Schrute, trained with 10 context lines of history for 20 epochs.
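A single-turn chat sketch following the usual DialoGPT interaction pattern; the user prompt is made up and the generation settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch of one conversational turn using the standard DialoGPT chat format.
tokenizer = AutoTokenizer.from_pretrained("delvan/DialoGPT-medium-DwightV1")
model = AutoModelForCausalLM.from_pretrained("delvan/DialoGPT-medium-DwightV1")

user_input = "Do you grow beets on Schrute Farms?"  # made-up prompt
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```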
{"tags": ["conversational"]}
delvan/DialoGPT-medium-DwightV1
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
delvan/DialoGPT-small-DwightV2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{"license": "afl-3.0"}
delviana/Delvi
null
[ "license:afl-3.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This is a fine-tuned version of [SimCSE: Simple Contrastive Learning of Sentence Embeddings](https://arxiv.org/abs/2104.08821), trained unsupervised on 570K stroke-related sentences from stroke books, Quora medical questions, Quora stroke questions and human annotations.

### Extract sentence representation

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("demdecuong/stroke_simcse")
model = AutoModel.from_pretrained("demdecuong/stroke_simcse")

text = "What are disease related to red stroke's causes?"
inputs = tokenizer(text, return_tensors='pt')
outputs = model(**inputs)[1]
```

### Build up embedding for database

```python
import torch

database = [
    'What is the daily checklist for stroke returning home',
    'What are some tips for stroke adapt new life',
    'What should I consider when using nursing-home care'
]

embedding = torch.zeros((len(database), 768))
for i in range(len(database)):
    inputs = tokenizer(database[i], return_tensors="pt")
    outputs = model(**inputs)[1]
    embedding[i] = outputs

print(embedding.shape)
```

### Result

On our PoC test set, which contains human-generated pairs of matching questions related to stroke:

| Model | Top-1 Accuracy |
| ------------- | ------------- |
| SimCSE (supervised) | 75.83 |
| SimCSE (ours) | 76.66 |
{}
demdecuong/stroke_simcse
null
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2104.08821", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This is a fine-tuned version of [SimCSE: Simple Contrastive Learning of Sentence Embeddings](https://arxiv.org/abs/2104.08821).

- Trained supervised on 100K triplet samples related to the stroke domain from stroke books, Quora medical questions, Quora stroke questions, general Quora questions and human annotations.
- Positive sentences are generated by paraphrasing and back-translation.
- Negative sentences are randomly selected from the general domain.

### Extract sentence representation

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("demdecuong/stroke_sup_simcse")
model = AutoModel.from_pretrained("demdecuong/stroke_sup_simcse")

text = "What are disease related to red stroke's causes?"
inputs = tokenizer(text, return_tensors='pt')
outputs = model(**inputs)[1]
```

### Build up embedding for database

```python
import torch

database = [
    'What is the daily checklist for stroke returning home',
    'What are some tips for stroke adapt new life',
    'What should I consider when using nursing-home care'
]

embedding = torch.zeros((len(database), 768))
for i in range(len(database)):
    inputs = tokenizer(database[i], return_tensors="pt")
    outputs = model(**inputs)[1]
    embedding[i] = outputs

print(embedding.shape)
```

### Result

On our company's PoC project, the test set contains human-generated positive/negative pairs of matching questions related to stroke.

- SimCSE supervised + 100k: trained on 100K triplet samples covering the medical, stroke and general domains
- SimCSE supervised + 42k: trained on 42K triplet samples covering the medical and stroke domains

| Model | Top-1 Accuracy |
| ------------- | ------------- |
| SimCSE supervised (author) | 75.83 |
| SimCSE unsupervised (ours) | 76.66 |
| SimCSE supervised + 100k (ours) | 73.33 |
| SimCSE supervised + 42k (ours) | 75.83 |
{}
demdecuong/stroke_sup_simcse
null
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2104.08821", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
denden/iloko-test
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # iloko_model This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0095 - Wer: 0.0840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.2784 | 1.11 | 100 | 2.9875 | 1.0 | | 2.6899 | 2.22 | 200 | 2.6741 | 1.0 | | 2.6177 | 3.33 | 300 | 2.6516 | 1.0 | | 2.5327 | 4.44 | 400 | 2.4530 | 1.0 | | 0.8653 | 5.56 | 500 | 0.5227 | 0.6547 | | 0.3414 | 6.67 | 600 | 0.1830 | 0.2487 | | 0.2299 | 7.78 | 700 | 0.1212 | 0.1877 | | 0.1739 | 8.89 | 800 | 0.0843 | 0.1441 | | 0.1242 | 10.0 | 900 | 0.0766 | 0.1441 | | 0.1116 | 11.11 | 1000 | 0.0530 | 0.1145 | | 0.0861 | 12.22 | 1100 | 0.0442 | 0.1047 | | 0.1007 | 13.33 | 1200 | 0.0379 | 0.1023 | | 0.0613 | 14.44 | 1300 | 0.0291 | 0.1006 | | 0.0629 | 15.56 | 1400 | 0.0264 | 0.0961 | | 0.047 | 16.67 | 1500 | 0.0238 | 0.0935 | | 0.0797 | 17.78 | 1600 | 0.0226 | 0.0913 | | 0.034 | 18.89 | 1700 | 0.0197 | 0.0893 | | 0.0485 | 20.0 | 1800 | 0.0173 | 0.0905 | | 0.0402 | 21.11 | 1900 | 0.0148 | 0.0902 | | 0.0231 | 22.22 | 2000 | 0.0135 | 0.0891 | | 0.0512 | 23.33 | 2100 | 0.0134 | 0.0861 | | 0.0181 | 24.44 | 2200 | 0.0118 | 0.0842 | | 0.0371 | 25.56 | 2300 | 0.0116 | 0.0867 | | 0.0342 | 26.67 | 2400 | 0.0104 | 0.0863 | | 0.0344 | 27.78 | 2500 | 0.0100 | 0.0850 | | 0.0182 | 28.89 | 2600 | 0.0096 | 0.0839 | | 0.0171 | 30.0 | 2700 | 0.0095 | 0.0840 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "pipeline_tag": "automatic-speech-recognition"}
denden/iloko_model
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
denden/iloko_model_new
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
Fine-tuned Ilokano speech recognition model, fine-tuned from Wav2Vec2-XLSR-53.
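A minimal transcription sketch, assuming the checkpoint works with the standard ASR pipeline; the audio path is a placeholder and the input should be 16 kHz speech.

```python
from transformers import pipeline

# Sketch: transcribe one audio file; "sample.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition", model="denden/new_iloko")
print(asr("sample.wav"))
```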
{"language": ["en"], "license": "afl-3.0", "tags": ["audio", "automatic-speech-recognition", "speech"], "datasets": ["timit_asr"], "metrics": ["wer"], "pipeline_tag": "automatic-speech-recognition"}
denden/new_iloko
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "en", "dataset:timit_asr", "license:afl-3.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
denden/new_iloko_model
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
denden047/t5-small-finetuned-xsum
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dendihandian/bank_marketing
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# BERT-Wiki-Paragraphs

Authors: Satya Almasian\*, Dennis Aumiller\*, Lucienne-Sophie Marmé, Michael Gertz

Contact us at `<lastname>@informatik.uni-heidelberg.de`

Details for the training method can be found in our work [Structural Text Segmentation of Legal Documents](https://arxiv.org/abs/2012.03619). The training procedure follows the same setup, but we substitute legal documents for Wikipedia in this model. Find the associated training data here: [wiki-paragraphs](https://huggingface.co/datasets/dennlinger/wiki-paragraphs)

Training is performed in a weakly supervised fashion to determine whether paragraphs topically belong together or not. We utilize automatically generated samples from Wikipedia for training, where paragraphs from within the same section are assumed to be topically coherent. We use the same articles as ([Koshorek et al., 2018](https://arxiv.org/abs/1803.09337)), albeit from a 2021 dump of Wikipedia, and split at paragraph boundaries instead of the sentence level.

## Usage

Preferred usage is through `transformers.pipeline`:

```python
from transformers import pipeline
pipe = pipeline("text-classification", model="dennlinger/bert-wiki-paragraphs")

pipe("{First paragraph} [SEP] {Second paragraph}")
```

A predicted "1" means that paragraphs belong to the same topic, a "0" indicates a disconnect.

## Training Setup

The model was trained for 3 epochs from `bert-base-uncased` on paragraph pairs (limited to 512 subwords with the `longest_first` truncation strategy). We use a batch size of 24 with 2 iterations of gradient accumulation (effective batch size of 48), and a learning rate of 1e-4, with gradient clipping at 5. Training was performed on a single Titan RTX GPU over the duration of 3 weeks.
{"language": ["en"], "license": "mit", "tags": ["sentence-similarity", "text-classification"], "datasets": ["dennlinger/wiki-paragraphs"], "metrics": ["f1"]}
dennlinger/bert-wiki-paragraphs
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "sentence-similarity", "en", "dataset:dennlinger/wiki-paragraphs", "arxiv:2012.03619", "arxiv:1803.09337", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# About this model: Topical Change Detection in Documents This network has been fine-tuned for the task described in the paper *Topical Change Detection in Documents via Embeddings of Long Sequences* and is our best-performing base-transformer model. You can find more detailed information in our GitHub page for the paper [here](https://github.com/dennlinger/TopicalChange), or read the [paper itself](https://arxiv.org/abs/2012.03619). The weights are based on RoBERTa-base. # Load the model The preferred way is through pipelines ```python from transformers import pipeline pipe = pipeline("text-classification", model="dennlinger/roberta-cls-consec") pipe("{First paragraph} [SEP] {Second paragraph}") ``` # Input Format The model expects two segments that are separated with the `[SEP]` token. In our training setup, we had entire paragraphs as samples (or up to 512 tokens across two paragraphs), specifically trained on a Terms of Service data set. Note that this might lead to poor performance on "general" topics, such as news articles or Wikipedia. # Training objective The training task is to determine whether two text segments (paragraphs) belong to the same topical section or not. This can be utilized to create a topical segmentation of a document by consecutively predicting the "coherence" of two segments. If you are experimenting via the Huggingface Model API, the following are interpretations of the `LABEL`s: * `LABEL_0`: Two input segments separated by `[SEP]` do *not* belong to the same topic. * `LABEL_1`: Two input segments separated by `[SEP]` do belong to the same topic. # Performance The results of this model can be found in the paper. We average over models from five different random seeds, which is why the specific results for this model might be different from the exact values in the paper. Note that this model is *not* trained to work on classifying single texts, but only works with two (separated) inputs.
{}
dennlinger/roberta-cls-consec
null
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "text-classification", "arxiv:2012.03619", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
denohepo/v1
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
denpa92/bert-base-cantonese
null
[ "transformers", "pytorch", "jax", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
denritchie/tBERT-v1
null
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
deokisys/BCtest
null
[ "transformers", "pytorch", "jax", "bert", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# Bilingual English + German SQuAD2.0 We created German Squad 2.0 (**deQuAD 2.0**) and merged with [**SQuAD2.0**](https://rajpurkar.github.io/SQuAD-explorer/) into an English and German training data for question answering. The [**bert-base-multilingual-cased**](https://github.com/google-research/bert/blob/master/multilingual.md) is used to fine-tune bilingual QA downstream task. ## Details of deQuAD 2.0 [**SQuAD2.0**](https://rajpurkar.github.io/SQuAD-explorer/) was auto-translated into German. We hired professional editors to proofread the translated transcripts, correct mistakes and double check the answers to further polish the text and enhance annotation quality. The final German deQuAD dataset contains **130k** training and **11k** test samples. ## Overview - **Language model:** bert-base-multilingual-cased - **Language:** German, English - **Training data:** deQuAD2.0 + SQuAD2.0 training set - **Evaluation data:** SQuAD2.0 test set; deQuAD2.0 test set - **Infrastructure:** 8xV100 GPU - **Published**: July 9th, 2021 ## Evaluation on English SQuAD2.0 ``` HasAns_exact = 85.79622132253711 HasAns_f1 = 90.92004586077663 HasAns_total = 5928 NoAns_exact = 94.76871320437343 NoAns_f1 = 94.76871320437343 NoAns_total = 5945 exact = 90.28889076054915 f1 = 92.84713483219753 total = 11873 ``` ## Evaluation on German deQuAD2.0 ``` HasAns_exact = 63.80526406330638 HasAns_f1 = 72.47269140789888 HasAns_total = 5813 NoAns_exact = 82.0291893792861 NoAns_f1 = 82.0291893792861 NoAns_total = 5687 exact = 72.81739130434782 f1 = 77.19858740470603 total = 11500 ``` ## Use Model in Pipeline ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="deutsche-telekom/bert-multi-english-german-squad2", tokenizer="deutsche-telekom/bert-multi-english-german-squad2" ) contexts = ["Die Allianz Arena ist ein Fußballstadion im Norden von München und bietet bei Bundesligaspielen 75.021 Plätze, zusammengesetzt aus 57.343 Sitzplätzen, 13.794 Stehplätzen, 1.374 Logenplätzen, 2.152 Business Seats und 966 Sponsorenplätzen. In der Allianz Arena bestreitet der FC Bayern München seit der Saison 2005/06 seine Heimspiele. Bis zum Saisonende 2017 war die Allianz Arena auch Spielstätte des TSV 1860 München.", "Harvard is a large, highly residential research university. It operates several arts, cultural, and scientific museums, alongside the Harvard Library, which is the world's largest academic and private library system, comprising 79 individual libraries with over 18 million volumes. "] questions = ["Wo befindet sich die Allianz Arena?", "What is the worlds largest academic and private library system?"] qa_pipeline(context=contexts, question=questions) ``` # Output: ```json [{'score': 0.7290093898773193, 'start': 44, 'end': 62, 'answer': 'Norden von München'}, {'score': 0.7979822754859924, 'start': 134, 'end': 149, 'answer': 'Harvard Library'}] ``` ## License - The MIT License Copyright (c) 2021 Fang Xu, Deutsche Telekom AG
{"language": ["de", "en", "multilingual"], "license": "mit", "tags": ["english", "german"]}
deutsche-telekom/bert-multi-english-german-squad2
null
[ "transformers", "pytorch", "safetensors", "bert", "question-answering", "english", "german", "de", "en", "multilingual", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
We released the German Question Answering model fine-tuned with our own German Question Answering dataset (**deQuAD**) containing **130k** training and **11k** test QA pairs. ## Overview - **Language model:** [electra-base-german-uncased](https://huggingface.co/german-nlp-group/electra-base-german-uncased) - **Language:** German - **Training data:** deQuAD2.0 training set (~42MB) - **Evaluation data:** deQuAD2.0 test set (~4MB) - **Infrastructure:** 8xV100 GPU ## Evaluation We benchmarked the question answering performance on our deQuAD test data with some German language models. The fine-tuned electra-base-german-uncased model gives the best performance (Exact Match/F1). | Model | All | HasAns | NoAns | |-------|--------|--------|--------| | electra-base-german-uncased | 70.97/76.18 | 67.73/78.02 | 74.29/74.29 | | bert-base-german-cased |58.98/64.77| 49.19/60.63| 69.03/69.03| |bert-base-german-dbmdz-uncased|63.70/68.00| 57.03/65.52| 70.51/70.51 | |dbmdz/bert-base-german-europeana-uncased| 58.79/63.38| 52.14/61.22| 65.59/65.59| ## Use Model in Pipeline ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="deutsche-telekom/electra-base-de-squad2", tokenizer="deutsche-telekom/electra-base-de-squad2" ) contexts = ['''Die Robert Bosch GmbH ist ein im Jahr 1886 von Robert Bosch gegründetes multinationales deutsches Unternehmen. Es ist tätig als Automobilzulieferer, Hersteller von Gebrauchsgütern und Industrie- und Gebäudetechnik und darüber hinaus in der automatisierten Verpackungstechnik, wo Bosch den führenden Platz einnimmt. Die Robert Bosch GmbH und ihre rund 460 Tochter- und Regionalgesellschaften in mehr als 60 Ländern bilden die Bosch-Gruppe. Der Sitz der Geschäftsführung befindet sich auf der Schillerhöhe in Gerlingen, der Firmensitz in Stuttgart. Seit dem 1. Juli 2012 ist Volkmar Denner Vorsitzender der Geschäftsführung. Im Jahr 2015 konnte Bosch die Spitzenposition zurückgewinnen. Die Automobilsparte war im Jahr 2018 für 61 % des Konzernumsatzes von Bosch verantwortlich. Das Unternehmen hatte im Jahr 2018 in Deutschland an 85 Standorten 139.400 Mitarbeiter.''']*2 questions = ["Wer leitet die Robert Bosch GmbH?", "Wer begründete die Robert Bosch GmbH?"] qa_pipeline(context=contexts, question=questions) ``` ## Output ```json [{'score': 0.9537325501441956, 'start': 577, 'end': 591, 'answer': 'Volkmar Denner'}, {'score': 0.8804352879524231, 'start': 47, 'end': 59, 'answer': 'Robert Bosch'}] ``` ## License - The MIT License Copyright (c) 2021 Fang Xu, Deutsche Telekom AG
{"language": "de", "license": "mit", "tags": ["german"]}
deutsche-telekom/electra-base-de-squad2
null
[ "transformers", "pytorch", "safetensors", "electra", "question-answering", "german", "de", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
summarization
transformers
# mT5-small-sum-de-en-v1 This is a bilingual summarization model for English and German. It is based on the multilingual T5 model [google/mt5-small](https://huggingface.co/google/mt5-small). [![One Conversation](https://raw.githubusercontent.com/telekom/HPOflow/main/docs/source/imgs/1c-logo.png)](https://www.welove.ai/) This model is provided by the [One Conversation](https://www.welove.ai/) team of [Deutsche Telekom AG](https://www.telekom.com/). ## Training The training was conducted with the following hyperparameters: - base model: [google/mt5-small](https://huggingface.co/google/mt5-small) - source_prefix: `"summarize: "` - batch size: 3 - max_source_length: 800 - max_target_length: 96 - warmup_ratio: 0.3 - number of train epochs: 10 - gradient accumulation steps: 2 - learning rate: 5e-5 ## Datasets and Preprocessing The datasets were preprocessed as follows: The summary was tokenized with the [google/mt5-small](https://huggingface.co/google/mt5-small) tokenizer. Then only the records with no more than 94 summary tokens were selected. The MLSUM dataset has a special characteristic. In the text, the summary is often included completely as one or more sentences. These have been removed from the texts. The reason is that we do not want to train a model that ultimately extracts only sentences as a summary. This model is trained on the following datasets: | Name | Language | Size | License |------|----------|------|-------- | [CNN Daily - Train](https://github.com/abisee/cnn-dailymail) | en | 218,223 | The license is unclear. The data comes from CNN and Daily Mail. We assume that it may only be used for research purposes and not commercially. | [Extreme Summarization (XSum) - Train](https://github.com/EdinburghNLP/XSum) | en | 204,005 | The license is unclear. The data comes from BBC. We assume that it may only be used for research purposes and not commercially. | [wiki_lingua English](https://github.com/esdurmus/Wikilingua) | en | 130,331 | [Creative Commons CC BY-NC-SA 3.0 License](https://www.wikihow.com/wikiHow:Terms-of-Use) | [wiki_lingua German](https://github.com/esdurmus/Wikilingua) | de | 48,390 | [Creative Commons CC BY-NC-SA 3.0 License](https://www.wikihow.com/wikiHow:Terms-of-Use) | [MLSUM German - Train](https://github.com/ThomasScialom/MLSUM) | de | 218,043 | Usage of dataset is restricted to non-commercial research purposes only. Copyright belongs to the original copyright holders (see [here](https://github.com/ThomasScialom/MLSUM#mlsum)). | [SwissText 2019 - Train](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html) | de | 84,564 | The license is unclear. The data was published in the [German Text Summarization Challenge](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html). We assume that they may be used for research purposes and not commercially. 
| Language | Size |------|------ | German | 350,997 | English | 552,559 | Total | 903,556 ## Evaluation on MLSUM German Test Set (no beams) | Model | rouge1 | rouge2 | rougeL | rougeLsum |-------|--------|--------|--------|---------- | [ml6team/mt5-small-german-finetune-mlsum](https://huggingface.co/ml6team/mt5-small-german-finetune-mlsum) | 18.3607 | 5.3604 | 14.5456 | 16.1946 | **deutsche-telekom/mT5-small-sum-de-en-01 (this)** | **21.7336** | **7.2614** | **17.1323** | **19.3977** ## Evaluation on CNN Daily English Test Set (no beams) | Model | rouge1 | rouge2 | rougeL | rougeLsum |-------|--------|--------|--------|---------- | [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 26.7664 | 8.8243 | 18.3703 | 23.2614 | [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364 | [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 37.576 | 14.7389 | 24.0254 | 34.4634 | **deutsche-telekom/mT5-small-sum-de-en-01 (this)** | **37.6339** | **16.5317** | **27.1418** | **34.9951** ## Evaluation on Extreme Summarization (XSum) English Test Set (no beams) | Model | rouge1 | rouge2 | rougeL | rougeLsum |-------|--------|--------|--------|---------- | [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 18.6204 | 3.535 | 12.3997 | 15.2111 | [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364 | deutsche-telekom/mT5-small-sum-de-en-01 (this) | 32.3416 | 10.6191 | 25.3799 | 25.3908 | [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 44.2553 &clubs; | 21.4289 &clubs; | 36.2639 &clubs; | 36.2696 &clubs; &clubs;: These values seem to be unusually high. It could be that the test set was used in the training data. ## License Copyright (c) 2021 Philip May, Deutsche Telekom AG This work is licensed under the [Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0)](https://creativecommons.org/licenses/by-nc-sa/3.0/) license.
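The card above describes training with a `summarize: ` source prefix and a 96-token target length, but gives no usage snippet. A minimal sketch under those assumptions; the article string is a placeholder for a German or English source text.

```python
from transformers import pipeline

# Sketch only: the "summarize: " source prefix matches the training setup in the card;
# the article string is a placeholder.
summarizer = pipeline("summarization", model="deutsche-telekom/mt5-small-sum-de-en-v1")
article = "..."  # German or English source text goes here
print(summarizer("summarize: " + article, max_length=96))
```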
{"language": ["de", "en", "multilingual"], "license": "cc-by-nc-sa-4.0", "tags": ["summarization"], "datasets": ["cnn_dailymail", "xsum", "wiki_lingua", "mlsum", "swiss_text_2019"]}
deutsche-telekom/mt5-small-sum-de-en-v1
null
[ "transformers", "pytorch", "safetensors", "mt5", "text2text-generation", "summarization", "de", "en", "multilingual", "dataset:cnn_dailymail", "dataset:xsum", "dataset:wiki_lingua", "dataset:mlsum", "dataset:swiss_text_2019", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
summarization
transformers
# mT5-small-sum-de-mit-v1 This is a German summarization model. It is based on the multilingual T5 model [google/mt5-small](https://huggingface.co/google/mt5-small). The special characteristic of this model is that, unlike many other models, it is licensed under a permissive open source license (MIT). Among other things, this license allows commercial use. [![One Conversation](https://raw.githubusercontent.com/telekom/HPOflow/main/docs/source/imgs/1c-logo.png)](https://www.welove.ai/) This model is provided by the [One Conversation](https://www.welove.ai/) team of [Deutsche Telekom AG](https://www.telekom.com/). ## Training The training was conducted with the following hyperparameters: - base model: [google/mt5-small](https://huggingface.co/google/mt5-small) - source_prefix: `"summarize: "` - batch size: 3 (6) - max_source_length: 800 - max_target_length: 96 - warmup_ratio: 0.3 - number of train epochs: 10 - gradient accumulation steps: 2 - learning rate: 5e-5 ## Datasets and Preprocessing The datasets were preprocessed as follows: The summary was tokenized with the [google/mt5-small](https://huggingface.co/google/mt5-small) tokenizer. Then only the records with no more than 94 summary tokens were selected. This model is trained on the following dataset: | Name | Language | Size | License |------|----------|------|-------- | [SwissText 2019 - Train](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html) | de | 84,564 | Concrete license is unclear. The data was published in the [German Text Summarization Challenge](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html). We have permission to use the Swisstext dataset and release the resulting summarization model under MIT license (see [permission-declaration-swisstext.pdf](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-mit-v1/resolve/main/permission-declaration-swisstext.pdf)). ## Evaluation on MLSUM German Test Set (no beams) | Model | rouge1 | rouge2 | rougeL | rougeLsum |-------|--------|--------|--------|---------- | deutsche-telekom/mt5-small-sum-de-mit-v1 (this) | 16.8023 | 3.5531 | 12.6884 | 14.7624 | [ml6team/mt5-small-german-finetune-mlsum](https://huggingface.co/ml6team/mt5-small-german-finetune-mlsum) | 18.3607 | 5.3604 | 14.5456 | 16.1946 | **[deutsche-telekom/mt5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1)** | **21.7336** | **7.2614** | **17.1323** | **19.3977** ## License Copyright (c) 2021 Philip May, Deutsche Telekom AG Licensed under the MIT License (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License by reviewing the file [LICENSE](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-mit-v1/blob/main/LICENSE) in the repository.
{"language": ["de"], "license": "mit", "tags": ["summarization"], "datasets": ["swiss_text_2019"]}
deutsche-telekom/mt5-small-sum-de-mit-v1
null
[ "transformers", "pytorch", "safetensors", "mt5", "text2text-generation", "summarization", "de", "dataset:swiss_text_2019", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dev/pix2pix_CAD
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{"license": "other"}
dev114/ai-generated-blog-content
null
[ "license:other", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-NER-finetuned-ner This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the x_glue dataset. It achieves the following results on the evaluation set: - Loss: 1.4380 - Precision: 0.2274 - Recall: 0.1119 - F1: 0.1499 - Accuracy: 0.8485 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0822 | 1.0 | 878 | 1.1648 | 0.2068 | 0.1101 | 0.1437 | 0.8471 | | 0.0102 | 2.0 | 1756 | 1.2697 | 0.2073 | 0.1110 | 0.1445 | 0.8447 | | 0.0049 | 3.0 | 2634 | 1.3945 | 0.2006 | 0.1073 | 0.1399 | 0.8368 | | 0.0025 | 4.0 | 3512 | 1.3994 | 0.2243 | 0.1126 | 0.1499 | 0.8501 | | 0.0011 | 5.0 | 4390 | 1.4380 | 0.2274 | 0.1119 | 0.1499 | 0.8485 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["x_glue"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-NER-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "x_glue", "type": "x_glue", "args": "ner"}, "metrics": [{"type": "precision", "value": 0.2273838630806846, "name": "Precision"}, {"type": "recall", "value": 0.11185727172496743, "name": "Recall"}, {"type": "f1", "value": 0.14994961370507223, "name": "F1"}, {"type": "accuracy", "value": 0.8485324947589099, "name": "Accuracy"}]}]}]}
deval/bert-base-NER-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:x_glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the x_glue dataset. It achieves the following results on the evaluation set: - Loss: 2.7979 - Precision: 0.0919 - Recall: 0.1249 - F1: 0.1059 - Accuracy: 0.4927 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1773 | 1.0 | 878 | 1.7953 | 0.1025 | 0.1352 | 0.1166 | 0.5058 | | 0.0397 | 2.0 | 1756 | 2.0827 | 0.0906 | 0.1230 | 0.1043 | 0.4888 | | 0.022 | 3.0 | 2634 | 2.8677 | 0.0864 | 0.1260 | 0.1025 | 0.4098 | | 0.0126 | 4.0 | 3512 | 2.8584 | 0.0848 | 0.1201 | 0.0994 | 0.4424 | | 0.0085 | 5.0 | 4390 | 2.7979 | 0.0919 | 0.1249 | 0.1059 | 0.4927 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["x_glue"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "x_glue", "type": "x_glue", "args": "ner"}, "metrics": [{"type": "precision", "value": 0.09187560910782316, "name": "Precision"}, {"type": "recall", "value": 0.1248795761078998, "name": "Recall"}, {"type": "f1", "value": 0.10586493798172632, "name": "F1"}, {"type": "accuracy", "value": 0.492660102891609, "name": "Accuracy"}]}]}]}
deval/bert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:x_glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0606 - Precision: 0.9277 - Recall: 0.9385 - F1: 0.9330 - Accuracy: 0.9844 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2454 | 1.0 | 878 | 0.0692 | 0.9106 | 0.9212 | 0.9159 | 0.9809 | | 0.0517 | 2.0 | 1756 | 0.0616 | 0.9203 | 0.9352 | 0.9277 | 0.9834 | | 0.0314 | 3.0 | 2634 | 0.0606 | 0.9277 | 0.9385 | 0.9330 | 0.9844 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9276788676324229, "name": "Precision"}, {"type": "recall", "value": 0.9384718648618414, "name": "Recall"}, {"type": "f1", "value": 0.9330441552663775, "name": "F1"}, {"type": "accuracy", "value": 0.9843836878643939, "name": "Accuracy"}]}]}]}
deval/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
devansvd/bert-model-test-2
null
[ "transformers", "pytorch", "jax", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
# Fine-tuned Wav2Vec2 on TIMIT (checkpoint 4001)
{}
devin132/w2v-timit-ft-4001
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
devkushal75/medtextclassifier
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
devleejh/amsSummary
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
devparikh142003/DialoGPT-small-harrypotter
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
devtest12/bullet-points-tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
devtest12/f-sample-test-points-tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
devtrent/distilbert-base-uncased-finetuned-imdb-accelerate
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
devtrent/distilbert-base-uncased-finetuned-imdb
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# Dummy Model

This be a dummmmmy
{}
devtrent/dummy-model
null
[ "transformers", "pytorch", "camembert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
DistilBERT model trained on the OSCAR Nepali corpus from Hugging Face Datasets. We pre-trained the DistilBERT language model on the OSCAR Nepali corpus and then fine-tuned it for a downstream sentiment analysis task. The dataset used for sentiment analysis was extracted from Twitter by filtering for Devanagari text and labelled as positive, negative and neutral. However, since neutral labels exceeded the positive and negative tweets, we decided to use only positive and negative tweets for ease of training.

LABEL_1 = negative
LABEL_0 = positive
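A minimal sentiment-inference sketch, assuming the checkpoint works with the standard text-classification pipeline; the Nepali example sentence is made up, and the label mapping follows the card (LABEL_0 = positive, LABEL_1 = negative).

```python
from transformers import pipeline

# Sketch: per the card, LABEL_0 = positive and LABEL_1 = negative.
classifier = pipeline("text-classification", model="dexhrestha/Nepali-DistilBERT")
print(classifier("यो फिल्म राम्रो छ"))  # made-up Nepali example ("this film is good")
```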
{}
dexhrestha/Nepali-DistilBERT
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dextter/dex-model
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Aerith GPT model
{"tags": ["conversational"]}
df4rfrrf/DialoGPT-medium-Aerith
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dfernandez/56678
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dfgvhxfgv/dfghtfghjbgh
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dgspai/paradox
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
This is the repo for the final project.
{}
dhairya2303/bert-base-uncased-emotion-AD
null
[ "transformers", "tf", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
Label mapping: {'sadness': 0, 'joy': 1, 'love': 2, 'anger': 3, 'fear': 4, 'surprise': 5}
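A minimal inference sketch, assuming the checkpoint works with the standard text-classification pipeline and returns labels following the mapping above; the example sentence is made up.

```python
from transformers import pipeline

# Sketch: emotion classification; labels follow the mapping listed in the card.
classifier = pipeline("text-classification", model="dhairya2303/bert-base-uncased-emotion_holler")
print(classifier("I can't believe this actually worked!"))
```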
{}
dhairya2303/bert-base-uncased-emotion_holler
null
[ "transformers", "tf", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
dhairya2303/finetuned-bert-mrpc
null
[ "transformers", "tensorboard", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-finetuned-funsd-test This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1 - Datasets 1.18.0 - Tokenizers 0.11.0
{"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "layoutlmv2-finetuned-funsd-test", "results": []}]}
dhanesh123in/layoutlmv2-finetuned-funsd-test
null
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "token-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dhanuja/marian-finetuned-kde4-en-to-fr
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dhanuja/opus-mt-en-fr-finetuned-en-to-con
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dhanuja/opus-mt-en-ro-finetuned-en-to-con
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dhanuja/opus-mt-en-ro-finetuned-en-to-ro
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# AMy San
{"tags": ["conversational"]}
dhanushlnaik/amySan
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dharmesh8b/indian-accent-english-asr
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{"license": "afl-3.0"}
dheeraja486/Abusive-classifier-indic-languages
null
[ "license:afl-3.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
"hello"
{}
dhikri/question_answering_glue
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
dhimskyy/wiki-bert
null
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
dhlpricing/MyGPT2TG-cased-v1
null
[ "transformers", "pytorch", "tf", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dhong/losad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dhong/test
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# DistilBert Dummy Sentiment Model

## Purpose

This is a dummy model that can be used for testing the transformers `pipeline` with the task `sentiment-analysis`. It should always give random results (i.e. `{"label": "negative", "score": 0.5}`).

## How to use

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", "dhpollack/distilbert-dummy-sentiment")
results = classifier(["this is a test", "another test"])
```

## Notes

This was created as follows:

1. Create a vocab.txt file (in /tmp/vocab.txt in this example).

```
[UNK]
[SEP]
[PAD]
[CLS]
[MASK]
```

2. Open a python shell:

```python
import transformers

config = transformers.DistilBertConfig(
    vocab_size=5,
    n_layers=1,
    n_heads=1,
    dim=1,
    hidden_dim=4 * 1,
    num_labels=2,
    id2label={0: "negative", 1: "positive"},
    label2id={"negative": 0, "positive": 1},
)
model = transformers.DistilBertForSequenceClassification(config)
tokenizer = transformers.DistilBertTokenizer("/tmp/vocab.txt", model_max_length=512)
config.save_pretrained(".")
model.save_pretrained(".")
tokenizer.save_pretrained(".")
```
{"language": ["multilingual", "en"], "tags": ["sentiment-analysis", "testing", "unit tests"]}
dhpollack/distilbert-dummy-sentiment
null
[ "transformers", "pytorch", "distilbert", "text-classification", "sentiment-analysis", "testing", "unit tests", "multilingual", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
dhruvgangwani/multilingual-toxic-classification
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
dhtocks/Named-Entity-Recognition
null
[ "transformers", "pytorch", "roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
dhtocks/Topic-Classification
null
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
### TUNiB-Electra Stereotype Detector

Fine-tuned TUNiB-Electra base with K-StereoSet.

Original code: https://github.com/newfull5/Stereotype-Detector
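A minimal usage sketch (not from the original card), assuming the hosted checkpoint loads with the standard `Auto*` classes and ships a sequence-classification head; the example sentence is arbitrary and the label names depend on the K-StereoSet fine-tuning, which is not documented here.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline

# Sketch only: assumes Auto*-compatible config and tokenizer files in the repo.
model = AutoModelForSequenceClassification.from_pretrained("dhtocks/tunib-electra-stereotype-classifier")
tokenizer = AutoTokenizer.from_pretrained("dhtocks/tunib-electra-stereotype-classifier")
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)

print(classifier("예시 문장입니다."))  # label names come from the model config
```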
{}
dhtocks/tunib-electra-stereotype-classifier
null
[ "transformers", "pytorch", "electra", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
di6ora/BERT
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
Language model 2 (context encoder) for language-agnostic Dense Passage Retrieval.
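A minimal loading sketch (not part of the original description), assuming the checkpoint is compatible with the DPR context-encoder classes that its tags suggest:

```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

# Sketch only: assumes DPR-compatible weights and tokenizer files in the repo.
tokenizer = DPRContextEncoderTokenizer.from_pretrained("diarsabri/LaDPR-context-encoder")
encoder = DPRContextEncoder.from_pretrained("diarsabri/LaDPR-context-encoder")

inputs = tokenizer("Passage text to index.", return_tensors="pt")
passage_embedding = encoder(**inputs).pooler_output  # shape: (1, hidden_size)
```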
{}
diarsabri/LaDPR-context-encoder
null
[ "transformers", "pytorch", "dpr", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
Language model 1 (query encoder) for language-agnostic Dense Passage Retrieval.
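A retrieval-scoring sketch (not part of the original description), pairing this query encoder with the companion context encoder above; it assumes both checkpoints load with the standard DPR classes and are meant to be used together as in standard DPR.

```python
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

# Sketch only: scores one passage against one query with a dot product,
# the usual DPR relevance score.
q_tok = DPRQuestionEncoderTokenizer.from_pretrained("diarsabri/LaDPR-query-encoder")
q_enc = DPRQuestionEncoder.from_pretrained("diarsabri/LaDPR-query-encoder")
c_tok = DPRContextEncoderTokenizer.from_pretrained("diarsabri/LaDPR-context-encoder")
c_enc = DPRContextEncoder.from_pretrained("diarsabri/LaDPR-context-encoder")

query_emb = q_enc(**q_tok("Who wrote Don Quixote?", return_tensors="pt")).pooler_output
passage_emb = c_enc(**c_tok("Miguel de Cervantes wrote Don Quixote.", return_tensors="pt")).pooler_output

score = torch.matmul(query_emb, passage_emb.T)  # higher means more relevant
print(score.item())
```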
{}
diarsabri/LaDPR-query-encoder
null
[ "transformers", "pytorch", "dpr", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
diazuto/Dd
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
diazuto/Diaz
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53

---
language: gl
datasets:
- OpenSLR 77
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Galician Wav2Vec2-Large-XLSR-53
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: OpenSLR
      type: openslr
      args: gl
    metrics:
    - name: Test WER
      type: wer
      value: 16.79
---

Wav2Vec2-Large-XLSR-53-galician

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Galician using the [OpenSLR](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "gl", split="test[:2%]")  # This is not available yet, load OpenSLR or your dataset instead

processor = Wav2Vec2Processor.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
model = Wav2Vec2ForCTC.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Galician test data of Common Voice (when it is released).

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "gl", split="test")  # This is not available yet, load OpenSLR or your dataset instead
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
model = Wav2Vec2ForCTC.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
model.to("cuda")

chars_to_ignore_regex = '[^a-záéíóúñ ]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 16.79 % on OpenSLR split

## Training

The OpenSLR [SLR77](https://openslr.org/77/) dataset was used for training and validation. The dataset was split as 70% for training, 15% for validation and 15% for testing.

The script used for training can be found [here](https://github.com/diego-fustes/xlsr-fine-tuning-gl)
{}
diego-fustes/wav2vec2-large-xlsr-gl
null
[ "transformers", "pytorch", "jax", "safetensors", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
diego51/bert-base-uncased-finetuned-swag
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
diegoAgher/w2v
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
diegor2/t5-small-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1

This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "model-index": [{"name": "t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1", "results": []}]}
diegor2/t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetu-truncated-d22eed
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt16_en_ro_pre_processed", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
diegor2/t5-tiny-random-length-96-learning_rate-0.0001-weight_decay-0.01-finetu-truncated-5e15da
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro-TRAIN_EPOCHS-1

This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4897
- Bleu: 0.0002
- Gen Len: 9.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 6.2585        | 1.0   | 76290 | 6.4897          | 0.0002 | 9.0     |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
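As an illustrative sketch (not part of the generated card), the checkpoint can be exercised through the standard seq2seq generation API. The `translate English to Romanian:` prefix is the usual T5 convention and is assumed here rather than documented; since the base model is a tiny randomly initialized T5 trained for one epoch, the output is not expected to be a usable translation (the reported BLEU is 0.0002).

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Sketch only: assumes Auto*-compatible tokenizer and config files in the repo.
model_id = "diegor2/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetu-truncated-41f800"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("translate English to Romanian: The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```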
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "metrics": ["bleu"], "model-index": [{"name": "t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro-TRAIN_EPOCHS-1", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16_en_ro_pre_processed", "type": "wmt16_en_ro_pre_processed", "args": "enro"}, "metrics": [{"type": "bleu", "value": 0.0002, "name": "Bleu"}]}]}]}
diegor2/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetu-truncated-41f800
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt16_en_ro_pre_processed", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1

This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "model-index": [{"name": "t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1", "results": []}]}
diegor2/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt16_en_ro_pre_processed", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
diegor2/translation-en-pt-t5-length-128-learning_rate-2e-05-weight_decay-0.01-truncated-1ab6b0
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
diegozs97/chemprot-seed-0-0k
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
diegozs97/chemprot-seed-0-1000k
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
diegozs97/chemprot-seed-0-100k
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
diegozs97/chemprot-seed-0-1500k
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
diegozs97/chemprot-seed-0-1800k
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
diegozs97/chemprot-seed-0-2000k
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
diegozs97/chemprot-seed-0-200k
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00