modelId: string (length 4 to 81)
tags: sequence
pipeline_tag: string (17 classes)
config: dict
downloads: int64 (0 to 59.7M)
first_commit: timestamp[ns, tz=UTC]
card: string (length 51 to 438k)
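Each row below pairs Hub metadata (modelId, tags, pipeline_tag, config, downloads, first_commit) with the raw model-card text. A minimal sketch of slicing such an export with pandas, assuming it has been saved as a JSON-lines file under the hypothetical name `model_cards.jsonl`:

```python
import json

import pandas as pd

# Assumption: the export is stored as JSON lines, one row per model.
df = pd.read_json("model_cards.jsonl", lines=True)

# Keep only feature-extraction models, most downloaded first.
feature_extraction = df[df["pipeline_tag"] == "feature-extraction"].sort_values(
    "downloads", ascending=False
)

for _, row in feature_extraction.head(5).iterrows():
    # The config column stores a dict (or JSON string) with the architecture list.
    config = row["config"] if isinstance(row["config"], dict) else json.loads(row["config"])
    print(row["modelId"], config.get("architectures"), row["downloads"])
```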
AnonymousSub/SR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-mushrooms results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mushrooms This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4432 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.734 | 1.0 | 157 | 2.5275 | | 2.5807 | 2.0 | 314 | 2.4169 | | 2.5122 | 3.0 | 471 | 2.4352 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
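The card above reports only a masked-language-modelling loss, so the natural way to query the checkpoint is the fill-mask pipeline. A minimal sketch, assuming the hypothetical repo id below (the card does not give the full Hub path):

```python
from transformers import pipeline

# Hypothetical repo id; the card only gives the short name.
fill_mask = pipeline("fill-mask", model="distilbert-base-uncased-finetuned-mushrooms")

# DistilBERT uses the [MASK] token.
for prediction in fill_mask("Most edible mushrooms grow in [MASK] forests."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```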
AnonymousSub/SR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-03-04T17:42:49Z
--- license: cc-by-4.0 language: mr datasets: - L3Cube-MahaCorpus --- ## MahaAlBERT MahaAlBERT is a Marathi AlBERT model trained on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets. [dataset link](https://github.com/l3cube-pune/MarathiNLP) More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159) ``` @InProceedings{joshi:2022:WILDRE6, author = {Joshi, Raviraj}, title = {L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources}, booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference}, month = {June}, year = {2022}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {97--101} } ```
AnonymousSub/SR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-03-04T18:29:40Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xlsr-53-Total_2e-4_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-Total_2e-4_2 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2733 - Wer: 0.2116 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.2741 | 0.1 | 200 | 2.9070 | 0.9707 | | 2.034 | 0.2 | 400 | 0.7240 | 0.6798 | | 1.0037 | 0.3 | 600 | 0.5651 | 0.5368 | | 0.8834 | 0.4 | 800 | 0.4709 | 0.4669 | | 0.7973 | 0.5 | 1000 | 0.4305 | 0.4261 | | 0.7489 | 0.6 | 1200 | 0.4017 | 0.3763 | | 0.7507 | 0.7 | 1400 | 0.3662 | 0.3481 | | 0.7108 | 0.8 | 1600 | 0.3604 | 0.3513 | | 0.7151 | 0.9 | 1800 | 0.3563 | 0.3406 | | 0.6755 | 1.0 | 2000 | 0.3365 | 0.3210 | | 0.6038 | 1.1 | 2200 | 0.3394 | 0.3053 | | 0.6109 | 1.2 | 2400 | 0.3179 | 0.2844 | | 0.5999 | 1.3 | 2600 | 0.3166 | 0.2773 | | 0.6291 | 1.4 | 2800 | 0.3134 | 0.2733 | | 0.626 | 1.5 | 3000 | 0.3060 | 0.2690 | | 0.6188 | 1.6 | 3200 | 0.3038 | 0.2644 | | 0.5757 | 1.7 | 3400 | 0.3015 | 0.2566 | | 0.5943 | 1.8 | 3600 | 0.2925 | 0.2494 | | 0.6043 | 1.9 | 3800 | 0.2858 | 0.2491 | | 0.5874 | 2.0 | 4000 | 0.2874 | 0.2452 | | 0.5263 | 2.1 | 4200 | 0.2800 | 0.2364 | | 0.5282 | 2.2 | 4400 | 0.2848 | 0.2387 | | 0.4953 | 2.3 | 4600 | 0.2793 | 0.2360 | | 0.5428 | 2.4 | 4800 | 0.2863 | 0.2414 | | 0.5618 | 2.5 | 5000 | 0.2788 | 0.2350 | | 0.5395 | 2.6 | 5200 | 0.2765 | 0.2325 | | 0.5178 | 2.7 | 5400 | 0.2787 | 0.2351 | | 0.5264 | 2.8 | 5600 | 0.2755 | 0.2312 | | 0.5222 | 2.9 | 5800 | 0.2692 | 0.2258 | | 0.5184 | 3.0 | 6000 | 0.2681 | 0.2242 | | 0.4826 | 3.1 | 6200 | 0.2736 | 0.2224 | | 0.479 | 3.2 | 6400 | 0.2896 | 0.2353 | | 0.4938 | 3.3 | 6600 | 0.2744 | 0.2252 | | 0.4772 | 3.4 | 6800 | 0.2735 | 0.2242 | | 0.4831 | 3.5 | 7000 | 0.2721 | 0.2225 | | 0.4869 | 3.6 | 7200 | 0.2710 | 0.2194 | | 0.4515 | 3.7 | 7400 | 0.2692 | 0.2196 | | 0.4732 | 3.8 | 7600 | 0.2729 | 0.2269 | | 0.4683 | 3.9 | 7800 | 0.2713 | 0.2211 | | 0.4674 | 4.0 | 8000 | 0.2642 | 0.2116 | | 0.4239 | 4.1 | 8200 | 0.2773 | 0.2176 | | 0.4306 | 4.2 | 8400 | 0.2779 | 0.2191 | | 0.441 | 4.3 | 8600 | 0.2758 | 0.2136 | | 0.4343 | 4.4 | 8800 | 0.2797 | 0.2203 | | 0.4059 | 4.5 | 9000 | 0.2763 | 0.2159 | | 0.4399 | 4.6 | 9200 | 0.2755 | 0.2123 | | 0.4131 | 4.7 | 9400 | 0.2741 | 0.2124 | | 0.4331 | 4.8 | 9600 | 0.2728 | 0.2101 | | 0.4288 | 4.9 | 9800 | 0.2730 | 0.2110 | | 0.4341 | 5.0 | 10000 | 0.2733 | 0.2116 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
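The XLSR checkpoint above is a CTC acoustic model evaluated by word error rate. A minimal sketch of greedy CTC decoding, assuming the hypothetical repo id below and a 16 kHz mono waveform (the silent placeholder array stands in for real audio):

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical repo id taken from the card's short name.
repo_id = "wav2vec2-large-xlsr-53-Total_2e-4_2"
processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)

# Placeholder: one second of silence at 16 kHz; replace with real audio samples.
waveform = np.zeros(16000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy decoding: per-frame argmax, then collapse repeats and CTC blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```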
AnonymousSub/SR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-03-04T18:31:45Z
--- language: - hi - en - multilingual license: cc-by-4.0 tags: - hi - en - codemix datasets: - L3Cube-HingCorpus --- ## HingBERT HingBERT is a Hindi-English code-mixed BERT model trained on roman text. It is a base BERT model fine-tuned on L3Cube-HingCorpus. <br> [dataset link](https://github.com/l3cube-pune/code-mixed-nlp) More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398) ``` @inproceedings{nayak-joshi-2022-l3cube, title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models", author = "Nayak, Ravindra and Joshi, Raviraj", booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.wildre-1.2", pages = "7--12", } ```
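HingBERT is distributed as a plain BERT encoder, so a typical use is extracting sentence embeddings for romanised code-mixed text. A minimal sketch with mean pooling, assuming the hypothetical repo id below (the card links the corpus and paper but not an exact Hub path):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical repo id; the card does not state the exact Hub path.
repo_id = "l3cube-pune/hing-bert"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

# Roman-script Hindi-English code-mixed input.
inputs = tokenizer("yeh model code-mixed text ke liye banaya gaya hai", return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)

# Mean-pool over non-padding tokens to get one vector per sentence.
mask = inputs.attention_mask.unsqueeze(-1)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)
```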
AnonymousSub/SR_rule_based_roberta_twostagetriplet_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-03-04T18:45:10Z
--- language: - hi - en - multilingual license: cc-by-4.0 tags: - hi - en - codemix datasets: - L3Cube-HingCorpus --- ## HingMBERT HingMBERT is a Hindi-English code-mixed BERT model trained on roman text. It is an mBERT model fine-tuned on L3Cube-HingCorpus. <br> [dataset link](https://github.com/l3cube-pune/code-mixed-nlp) More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398) ``` @inproceedings{nayak-joshi-2022-l3cube, title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models", author = "Nayak, Ravindra and Joshi, Raviraj", booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.wildre-1.2", pages = "7--12", } ```
AnonymousSub/SR_rule_based_twostagetriplet_hier_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2022-03-04T19:18:16Z
Arabic Model AraBertMo_base_V10 --- language: ar tags: Fill-Mask datasets: OSCAR widget: - text: " السلام عليكم ورحمة[MASK] وبركاتة" - text: " اهلا وسهلا بكم في [MASK] من سيربح المليون" - text: " مرحبا بك عزيزي الزائر [MASK] موقعنا " --- # Arabic BERT Model **AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats. ## Pretraining Corpus The `AraBertMo_base_V10` model was pre-trained on ~3 million words: - [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar". ## Training results This model achieves the following results: | Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss| |:----:|:----:|:----:|:----:|:-----:|:----:|:-----:| | Fill-Mask| 30024| 10 | 64 | 4700 | 9h 13m 43s | 7.2395 | ## Load Pretrained Model You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then loading it like this: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V10") model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V10") ``` ## This model was built for master's degree research at the following organization: - [University of Kufa](https://uokufa.edu.iq/). - [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/). - **Department of Computer Science**
AnonymousSub/cline-papers-roberta-0.585
[ "pytorch", "roberta", "transformers" ]
null
{ "architectures": [ "LecbertForPreTraining" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9246284188099615 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2183 - Accuracy: 0.9245 - F1: 0.9246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8174 | 1.0 | 250 | 0.3166 | 0.905 | 0.9023 | | 0.2534 | 2.0 | 500 | 0.2183 | 0.9245 | 0.9246 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2+cpu - Datasets 1.16.1 - Tokenizers 0.10.1
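The emotion classifier above reports accuracy and F1 on the validation split of the `emotion` dataset. A minimal sketch of spot-checking predictions against that split, assuming the hypothetical repo id below and that the checkpoint ships with usable label names:

```python
from datasets import load_dataset
from transformers import pipeline

# Hypothetical repo id; the card only gives the short name.
classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-emotion")

validation = load_dataset("emotion", split="validation[:8]")
label_names = validation.features["label"].names  # e.g. ["sadness", "joy", ...]

for example, prediction in zip(validation, classifier(validation["text"])):
    gold = label_names[example["label"]]
    print(f"gold={gold:<10} predicted={prediction['label']} ({prediction['score']:.2f})")
```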
AnonymousSub/declutr-s10-AR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xlsr-53-Total2e-4_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-Total2e-4_3 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2893 - Wer: 0.1863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.16 | 0.1 | 200 | 2.9123 | 0.9707 | | 2.4599 | 0.2 | 400 | 0.8145 | 0.6906 | | 1.0523 | 0.3 | 600 | 0.5247 | 0.4823 | | 0.8965 | 0.4 | 800 | 0.4391 | 0.4416 | | 0.7994 | 0.5 | 1000 | 0.3889 | 0.3773 | | 0.7491 | 0.6 | 1200 | 0.3604 | 0.3305 | | 0.7425 | 0.7 | 1400 | 0.3543 | 0.3277 | | 0.7253 | 0.8 | 1600 | 0.3397 | 0.3143 | | 0.7221 | 0.9 | 1800 | 0.3341 | 0.2979 | | 0.6853 | 1.0 | 2000 | 0.3244 | 0.2906 | | 0.6107 | 1.1 | 2200 | 0.3127 | 0.2771 | | 0.6233 | 1.2 | 2400 | 0.3116 | 0.2721 | | 0.6214 | 1.3 | 2600 | 0.3256 | 0.2671 | | 0.6511 | 1.4 | 2800 | 0.3019 | 0.2570 | | 0.6491 | 1.5 | 3000 | 0.2961 | 0.2576 | | 0.6411 | 1.6 | 3200 | 0.2963 | 0.2535 | | 0.5963 | 1.7 | 3400 | 0.2939 | 0.2526 | | 0.6146 | 1.8 | 3600 | 0.2908 | 0.2490 | | 0.6291 | 1.9 | 3800 | 0.2851 | 0.2448 | | 0.6154 | 2.0 | 4000 | 0.2861 | 0.2424 | | 0.5652 | 2.1 | 4200 | 0.2852 | 0.2411 | | 0.5648 | 2.2 | 4400 | 0.2856 | 0.2350 | | 0.5365 | 2.3 | 4600 | 0.2802 | 0.2395 | | 0.5855 | 2.4 | 4800 | 0.2883 | 0.2374 | | 0.5978 | 2.5 | 5000 | 0.2855 | 0.2364 | | 0.5863 | 2.6 | 5200 | 0.2736 | 0.2277 | | 0.5569 | 2.7 | 5400 | 0.2746 | 0.2293 | | 0.5628 | 2.8 | 5600 | 0.2719 | 0.2249 | | 0.5655 | 2.9 | 5800 | 0.2653 | 0.2224 | | 0.5578 | 3.0 | 6000 | 0.2685 | 0.2243 | | 0.5303 | 3.1 | 6200 | 0.2696 | 0.2204 | | 0.5316 | 3.2 | 6400 | 0.2733 | 0.2247 | | 0.5476 | 3.3 | 6600 | 0.2716 | 0.2203 | | 0.5326 | 3.4 | 6800 | 0.2697 | 0.2209 | | 0.5375 | 3.5 | 7000 | 0.2701 | 0.2197 | | 0.5364 | 3.6 | 7200 | 0.2655 | 0.2165 | | 0.503 | 3.7 | 7400 | 0.2650 | 0.2125 | | 0.5284 | 3.8 | 7600 | 0.2672 | 0.2162 | | 0.5251 | 3.9 | 7800 | 0.2669 | 0.2172 | | 0.5299 | 4.0 | 8000 | 0.2632 | 0.2081 | | 0.4904 | 4.1 | 8200 | 0.2674 | 0.2099 | | 0.496 | 4.2 | 8400 | 0.2700 | 0.2143 | | 0.5067 | 4.3 | 8600 | 0.2648 | 0.2090 | | 0.506 | 4.4 | 8800 | 0.2595 | 0.2069 | | 0.4795 | 4.5 | 9000 | 0.2653 | 0.2072 | | 0.5149 | 4.6 | 9200 | 0.2618 | 0.2073 | | 0.4786 | 4.7 | 9400 | 0.2632 | 0.2058 | | 0.5056 | 4.8 | 9600 | 0.2674 | 0.2123 | | 0.5059 | 4.9 | 9800 | 0.2642 | 0.2115 | | 0.5119 | 5.0 | 10000 | 0.2672 | 0.2089 | | 0.4619 | 5.1 | 10200 | 0.2658 | 0.2062 | | 0.4647 | 5.2 | 10400 | 0.2664 | 0.2025 | | 0.4707 | 5.3 | 10600 | 0.2656 | 0.2084 | | 0.486 | 5.4 | 10800 | 
0.2728 | 0.2029 | | 0.4785 | 5.5 | 11000 | 0.2653 | 0.2004 | | 0.4895 | 5.6 | 11200 | 0.2835 | 0.2119 | | 0.4519 | 5.7 | 11400 | 0.2715 | 0.2061 | | 0.484 | 5.8 | 11600 | 0.2663 | 0.2071 | | 0.4734 | 5.9 | 11800 | 0.2615 | 0.2023 | | 0.4563 | 6.0 | 12000 | 0.2604 | 0.1997 | | 0.4193 | 6.1 | 12200 | 0.2708 | 0.2015 | | 0.4516 | 6.2 | 12400 | 0.2724 | 0.2018 | | 0.4609 | 6.3 | 12600 | 0.2745 | 0.2004 | | 0.43 | 6.4 | 12800 | 0.2716 | 0.1979 | | 0.4424 | 6.5 | 13000 | 0.2674 | 0.1963 | | 0.4589 | 6.6 | 13200 | 0.2622 | 0.1977 | | 0.4458 | 6.7 | 13400 | 0.2668 | 0.1994 | | 0.4233 | 6.8 | 13600 | 0.2739 | 0.1978 | | 0.4557 | 6.9 | 13800 | 0.2692 | 0.1972 | | 0.4472 | 7.0 | 14000 | 0.2686 | 0.1942 | | 0.4193 | 7.1 | 14200 | 0.2843 | 0.1959 | | 0.4033 | 7.2 | 14400 | 0.2767 | 0.1945 | | 0.4266 | 7.3 | 14600 | 0.2808 | 0.1931 | | 0.419 | 7.4 | 14800 | 0.2801 | 0.1945 | | 0.4352 | 7.5 | 15000 | 0.2764 | 0.1934 | | 0.4248 | 7.6 | 15200 | 0.2818 | 0.1938 | | 0.4001 | 7.7 | 15400 | 0.2754 | 0.1931 | | 0.415 | 7.8 | 15600 | 0.2799 | 0.1916 | | 0.4056 | 7.9 | 15800 | 0.2746 | 0.1916 | | 0.419 | 8.0 | 16000 | 0.2789 | 0.1909 | | 0.3974 | 8.1 | 16200 | 0.2913 | 0.1897 | | 0.3999 | 8.2 | 16400 | 0.2894 | 0.1899 | | 0.4179 | 8.3 | 16600 | 0.2819 | 0.1918 | | 0.4081 | 8.4 | 16800 | 0.2868 | 0.1910 | | 0.3963 | 8.5 | 17000 | 0.2835 | 0.1889 | | 0.3748 | 8.6 | 17200 | 0.2841 | 0.1903 | | 0.375 | 8.7 | 17400 | 0.2820 | 0.1874 | | 0.3857 | 8.8 | 17600 | 0.2865 | 0.1872 | | 0.3901 | 8.9 | 17800 | 0.2824 | 0.1882 | | 0.4067 | 9.0 | 18000 | 0.2838 | 0.1887 | | 0.3711 | 9.1 | 18200 | 0.2892 | 0.1897 | | 0.3661 | 9.2 | 18400 | 0.2889 | 0.1883 | | 0.3796 | 9.3 | 18600 | 0.2876 | 0.1886 | | 0.3932 | 9.4 | 18800 | 0.2948 | 0.1877 | | 0.3894 | 9.5 | 19000 | 0.2896 | 0.1884 | | 0.3643 | 9.6 | 19200 | 0.2897 | 0.1868 | | 0.384 | 9.7 | 19400 | 0.2887 | 0.1867 | | 0.3951 | 9.8 | 19600 | 0.2905 | 0.1862 | | 0.3595 | 9.9 | 19800 | 0.2893 | 0.1866 | | 0.3758 | 10.0 | 20000 | 0.2893 | 0.1863 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
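The table above tracks word error rate (WER) at each evaluation step. A minimal sketch of the metric itself using the `jiwer` package (an assumption; the auto-generated cards do not say which implementation produced their numbers):

```python
from jiwer import wer

references = [
    "the quick brown fox jumps over the lazy dog",
    "speech recognition systems are scored with word error rate",
]
hypotheses = [
    "the quick brown fox jumped over the lazy dog",
    "speech recognition systems are scored with word error rates",
]

# WER = (substitutions + insertions + deletions) / number of reference words.
print(f"WER: {wer(references, hypotheses):.3f}")
```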
AnonymousSub/declutr-s10-SR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.862669465085938 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1374 - F1: 0.8627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2596 | 1.0 | 525 | 0.1571 | 0.8302 | | 0.1292 | 2.0 | 1050 | 0.1416 | 0.8455 | | 0.0809 | 3.0 | 1575 | 0.1374 | 0.8627 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
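The PAN-X fine-tune above is a token-classification (NER) model scored with F1. A minimal sketch of tagging a German sentence, assuming the hypothetical repo id below:

```python
from transformers import pipeline

# Hypothetical repo id; the card only gives the short name.
ner = pipeline(
    "token-classification",
    model="xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)

for entity in ner("Angela Merkel besuchte das Siemens-Werk in München."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```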
AnonymousSub/declutr-techqa
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - conversational --- # Rick DialoGPT Model
AnonymousSub/dummy_1
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- license: apache-2.0 tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b0-finetuned-segments-sidewalk results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-finetuned-segments-sidewalk This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset. It achieves the following results on the evaluation set: - Loss: 0.5679 - Miou: 0.2769 - Macc: 0.3331 - Overall Accuracy: 0.8424 - Per Category Iou: [nan, 0.7174911859423314, 0.8790751054409742, 0.6065232798410057, 0.6975274018055722, 0.3486407385349508, nan, 0.40093167116703843, 0.28779837903852556, 0.0, 0.7870339041746186, 0.0, 0.0, 0.0, 0.0, 0.1464360606454247, 0.0, 0.0, 0.6770283275082656, 0.0, 0.338555175257431, 0.14697310016578427, 0.0, nan, 0.0, 0.27163002251763635, 0.0, 0.0, 0.8257437911843676, 0.7169333376341568, 0.9108105550493353, 0.0, 0.0, 0.1016801552778885, 0.0] - Per Category Accuracy: [nan, 0.9199960254104915, 0.9327745517652714, 0.7304629327758765, 0.7378309547498484, 0.45295941407150275, nan, 0.5188608021128075, 0.5327441812670195, 0.0, 0.9353764765979435, 0.0, 0.0, 0.0, 0.0, 0.1588525415198792, 0.0, 0.0, 0.9238854794385364, 0.0, 0.4400394213522207, 0.15130051149615126, 0.0, nan, 0.0, 0.3570096986572905, 0.0, 0.0, 0.9359897980968498, 0.8570458108260572, 0.9549583230619891, 0.0, 0.0, 0.11786971668879294, 0.0] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Miou | Macc | Overall Accuracy | Per Category Iou | Per Category Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:----------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | 1.357 | 1.0 | 400 | 1.0006 | 0.1632 | 0.2069 | 0.7524 | [nan, 0.5642795884663824, 0.7491853309192827, 0.0, 0.40589649630192104, 0.02723606910696284, nan, 0.0002207740938439576, 0.0, 0.0, 0.6632462867093903, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5671699281129761, 0.0, 0.0009207911027492868, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.7507253434892517, 0.6157793573905029, 0.8774768871968204, 0.0, 0.0, 0.0, 0.0] | [nan, 
0.6839993330882016, 0.9786792586618772, 0.0, 0.4818162160949784, 0.02785198456498826, nan, 0.00022133459131411787, 0.0, 0.0, 0.9043689536433023, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8606078323791991, 0.0, 0.0009210330367246509, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.895198618615298, 0.8549807032886052, 0.9328734839751688, 0.0, 0.0, 0.0, 0.0] | | 1.6346 | 2.0 | 800 | 0.7856 | 0.1903 | 0.2334 | 0.7917 | [nan, 0.6276046255936906, 0.8379492348238635, 0.0, 0.5220035981992285, 0.19441920935217594, nan, 0.16135703555333, 0.0, 0.0, 0.7357165628674137, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.567598980063164, 0.0, 0.07867871139133086, 0.0, 0.0, nan, 0.0, 0.02123705398363847, 0.0, 0.0, 0.7917172051343153, 0.6589515948064048, 0.8916684207946344, 0.0, 0.0, 0.00013685918191589503, 0.0] | [nan, 0.8610263337355926, 0.9499345560017969, 0.0, 0.5908796687797819, 0.2144081438468206, nan, 0.1813236746419022, 0.0, 0.0, 0.8825551027577866, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9239907140298015, 0.0, 0.08495225520298297, 0.0, 0.0, nan, 0.0, 0.021302829364985724, 0.0, 0.0, 0.9258397010509258, 0.8834861376443207, 0.9489131468773239, 0.0, 0.0, 0.0001372777815910495, 0.0] | | 0.659 | 3.0 | 1200 | 0.6798 | 0.2215 | 0.2687 | 0.8107 | [nan, 0.6728474586764454, 0.8404607924530816, 0.21147709475332813, 0.5407350347311378, 0.23535489130104167, nan, 0.3087159264982809, 0.0060319580742948155, 0.0, 0.7331305064022374, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6378031991744924, 0.0, 0.35289337122777764, 6.24997656258789e-05, 0.0, nan, 0.0, 0.14698390926256938, 0.0, 0.0, 0.8019042204623998, 0.669283249725758, 0.8928145424856038, 0.0, 0.0, 0.03847722460691187, 0.0] | [nan, 0.866012011452706, 0.9627112260298595, 0.21236715482371135, 0.5645869262075475, 0.2750610095322395, nan, 0.3857655597748765, 0.0060319580742948155, 0.0, 0.939196440844118, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8380282443529743, 0.0, 0.5749902063170915, 6.256068386334744e-05, 0.0, nan, 0.0, 0.1605725590139305, 0.0, 0.0, 0.9212803460870584, 0.8870298583701837, 0.959700359744241, 0.0, 0.0, 0.04453994364914478, 0.0] | | 0.5481 | 4.0 | 1600 | 0.5999 | 0.2522 | 0.2998 | 0.8312 | [nan, 0.7078353465279917, 0.8661728761172196, 0.3857324719136883, 0.6338278880825696, 0.3440050078187208, nan, 0.35980405625532347, 0.23875867241702606, 0.0, 0.773703347865372, 0.0, 0.0, 0.0, 0.0, 0.0004931363471679884, 0.0, 0.0, 0.6554146448850521, 0.0, 0.367673493717809, 0.03089804641909161, 0.0, nan, 0.0, 0.21529017459808872, 0.0, 0.0, 0.818951849158376, 0.7007504838794707, 0.9053929635423027, 0.0, 0.0, 0.06626212301200333, 0.0] | [nan, 0.8955207784307155, 0.9536263694097721, 0.39712577675621036, 0.6989299616008556, 0.4248959179453637, nan, 0.42984959564233455, 0.26168627652468784, 0.0, 0.9055166364779607, 0.0, 0.0, 0.0, 0.0, 0.0004932058379466533, 0.0, 0.0, 0.8632164276000204, 0.0, 0.6365580872107307, 0.031401709658368616, 0.0, nan, 0.0, 0.2497286263775161, 0.0, 0.0, 0.9296676429517725, 0.8858954297713482, 0.9555756265860916, 0.0, 0.0, 0.0750792276952902, 0.0] | | 0.7855 | 5.0 | 2000 | 0.5679 | 0.2769 | 0.3331 | 0.8424 | [nan, 0.7174911859423314, 0.8790751054409742, 0.6065232798410057, 0.6975274018055722, 0.3486407385349508, nan, 0.40093167116703843, 0.28779837903852556, 0.0, 0.7870339041746186, 0.0, 0.0, 0.0, 0.0, 0.1464360606454247, 0.0, 0.0, 0.6770283275082656, 0.0, 0.338555175257431, 0.14697310016578427, 0.0, nan, 0.0, 0.27163002251763635, 0.0, 0.0, 0.8257437911843676, 0.7169333376341568, 0.9108105550493353, 0.0, 0.0, 0.1016801552778885, 0.0] | [nan, 0.9199960254104915, 0.9327745517652714, 
0.7304629327758765, 0.7378309547498484, 0.45295941407150275, nan, 0.5188608021128075, 0.5327441812670195, 0.0, 0.9353764765979435, 0.0, 0.0, 0.0, 0.0, 0.1588525415198792, 0.0, 0.0, 0.9238854794385364, 0.0, 0.4400394213522207, 0.15130051149615126, 0.0, nan, 0.0, 0.3570096986572905, 0.0, 0.0, 0.9359897980968498, 0.8570458108260572, 0.9549583230619891, 0.0, 0.0, 0.11786971668879294, 0.0] | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
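The SegFormer card above reports mean and per-category IoU on sidewalk imagery. A minimal sketch of inference plus upsampling the logits back to the input resolution, assuming the hypothetical repo id below and a local RGB image:

```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

# Hypothetical repo id; the card only gives the short name.
repo_id = "segformer-b0-finetuned-segments-sidewalk"
feature_extractor = SegformerFeatureExtractor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)

image = Image.open("sidewalk.jpg")  # assumed local RGB image
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, height/4, width/4)

# Upsample to the original resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation_map = upsampled.argmax(dim=1)[0]
print(segmentation_map.shape, segmentation_map.unique())
```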
AnonymousSub/dummy_2
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
39
null
--- license: apache-2.0 tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mt5-small-finetuned-amazon-en-zh_TW results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-zh_TW This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2408 - Rouge1: 15.8831 - Rouge2: 7.1676 - Rougel: 15.5523 - Rougelsum: 15.4954 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 7.5388 | 1.0 | 838 | 3.5888 | 12.6081 | 5.3611 | 12.3495 | 12.2926 | | 4.0043 | 2.0 | 1676 | 3.4038 | 13.8517 | 6.3417 | 13.4755 | 13.4913 | | 3.6776 | 3.0 | 2514 | 3.3294 | 15.1519 | 7.3842 | 14.8844 | 14.8458 | | 3.4929 | 4.0 | 3352 | 3.2668 | 15.6067 | 7.4016 | 15.3715 | 15.2908 | | 3.387 | 5.0 | 4190 | 3.2855 | 15.0546 | 7.3065 | 14.8271 | 14.7755 | | 3.302 | 6.0 | 5028 | 3.2457 | 15.0213 | 6.6597 | 14.6131 | 14.5641 | | 3.2806 | 7.0 | 5866 | 3.2408 | 15.8831 | 7.1676 | 15.5523 | 15.4954 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
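The mT5 card above is a review-summarisation fine-tune scored with ROUGE. A minimal sketch of generating a summary with beam search, assuming the hypothetical repo id below:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical repo id; the card only gives the short name.
repo_id = "mt5-small-finetuned-amazon-en-zh_TW"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

review = (
    "I bought this kettle last month. It boils water quickly, "
    "the handle stays cool, and it looks great on the counter."
)
inputs = tokenizer(review, return_tensors="pt", truncation=True, max_length=512)

summary_ids = model.generate(**inputs, max_length=32, num_beams=4, length_penalty=0.8)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```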
AnonymousSub/hier_triplet_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - conversational --- # General DialoGPT Model
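The card above is only a title, but DialoGPT-style conversational models are used as causal LMs: each user turn is appended to the running chat history, terminated with the EOS token, and the model generates the next reply. A minimal sketch, using the public microsoft/DialoGPT-medium checkpoint as a stand-in because the card gives no Hub path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the public DialoGPT checkpoint stands in for the card's model.
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

chat_history = None
for user_turn in ["Hello, how are you?", "What are you doing today?"]:
    # Each turn ends with the EOS token; the history grows turn by turn.
    new_ids = tokenizer.encode(user_turn + tokenizer.eos_token, return_tensors="pt")
    input_ids = new_ids if chat_history is None else torch.cat([chat_history, new_ids], dim=-1)

    chat_history = model.generate(
        input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id
    )
    reply = tokenizer.decode(chat_history[0, input_ids.shape[-1]:], skip_special_tokens=True)
    print(f"user: {user_turn}\nbot:  {reply}")
```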
AnonymousSub/rule_based_bert_mean_diff_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: mit language: de --- # german-financial-statements-bert This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) using German financial statements. It achieves the following results on the evaluation set: - Loss: 1.2025 - Accuracy: 0.7376 - Perplexity: 3.3285 ## Model description Annual financial statements in Germany are published in the Federal Gazette and are freely accessible. The documents describe the entrepreneurial and in particular the financial situation of a company with reference to a reporting period. The german-financial-statements-bert model aims to provide a BERT model specifically for this domain. ## Training and evaluation data The training was performed with 100,000 natural language sentences from annual financial statements. 50,000 of these sentences were taken unfiltered and randomly from 5,500 different financial statement documents, and another 50,000 were also taken randomly from 5,500 different financial statement documents, but this half was filtered so that only sentences referring to a financial entity were selected. Specifically, this means that the second half of the sentences contains an indicator for a reference to a financial entity (EUR, Euro, TEUR, €, T€). The evaluation was carried out with 20,000 sentences of the same origin and distribution. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
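The card above reports an evaluation loss of 1.2025 alongside a perplexity of 3.3285; for a language model, perplexity is the exponential of the mean cross-entropy loss. A short check of that arithmetic:

```python
import math

eval_loss = 1.2025  # evaluation loss reported in the card above
perplexity = math.exp(eval_loss)
print(f"perplexity: {perplexity:.4f}")  # approximately 3.3285, matching the card
```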
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: my-wav2vec2-base-timit-demo-colab-my results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-wav2vec2-base-timit-demo-colab-my This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5569 - Wer: 0.3481 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4083 | 4.0 | 500 | 1.0932 | 0.7510 | | 0.5536 | 8.0 | 1000 | 0.4965 | 0.4819 | | 0.2242 | 12.0 | 1500 | 0.4779 | 0.4077 | | 0.1249 | 16.0 | 2000 | 0.4921 | 0.4006 | | 0.0844 | 20.0 | 2500 | 0.4809 | 0.3753 | | 0.0613 | 24.0 | 3000 | 0.5307 | 0.3680 | | 0.0459 | 28.0 | 3500 | 0.5569 | 0.3481 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: swbd-5percent-supervised results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swbd-5percent-supervised This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6970 - Wer: 0.1352 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.8534 | 0.64 | 1000 | 2.9535 | 1.0 | | 1.8605 | 1.28 | 2000 | 0.7878 | 0.3719 | | 0.9862 | 1.92 | 3000 | 0.5906 | 0.2684 | | 0.8405 | 2.56 | 4000 | 0.5555 | 0.2151 | | 0.6972 | 3.2 | 5000 | 0.5905 | 0.1992 | | 0.6033 | 3.84 | 6000 | 0.4867 | 0.1781 | | 0.5393 | 4.48 | 7000 | 0.5447 | 0.1805 | | 0.529 | 5.12 | 8000 | 0.5398 | 0.1746 | | 0.5072 | 5.77 | 9000 | 0.5093 | 0.1706 | | 0.4331 | 6.41 | 10000 | 0.4990 | 0.1627 | | 0.4837 | 7.05 | 11000 | 0.5319 | 0.1634 | | 0.3867 | 7.69 | 12000 | 0.4866 | 0.1595 | | 0.345 | 8.33 | 13000 | 0.5202 | 0.1582 | | 0.372 | 8.97 | 14000 | 0.5396 | 0.1547 | | 0.355 | 9.61 | 15000 | 0.5992 | 0.1493 | | 0.3258 | 10.25 | 16000 | 0.5247 | 0.1527 | | 0.3327 | 10.89 | 17000 | 0.5664 | 0.1512 | | 0.3422 | 11.53 | 18000 | 0.5819 | 0.1456 | | 0.2815 | 12.17 | 19000 | 0.5692 | 0.1453 | | 0.2719 | 12.81 | 20000 | 0.5012 | 0.1476 | | 0.2838 | 13.45 | 21000 | 0.5286 | 0.1454 | | 0.2418 | 14.09 | 22000 | 0.6238 | 0.1486 | | 0.2412 | 14.73 | 23000 | 0.5889 | 0.1456 | | 0.2227 | 15.37 | 24000 | 0.5901 | 0.1459 | | 0.2129 | 16.02 | 25000 | 0.5959 | 0.1454 | | 0.2071 | 16.66 | 26000 | 0.6259 | 0.1427 | | 0.2185 | 17.3 | 27000 | 0.6581 | 0.1437 | | 0.1982 | 17.94 | 28000 | 0.6194 | 0.1411 | | 0.1928 | 18.58 | 29000 | 0.5940 | 0.1409 | | 0.1885 | 19.22 | 30000 | 0.6733 | 0.1417 | | 0.1835 | 19.86 | 31000 | 0.6363 | 0.1393 | | 0.1756 | 20.5 | 32000 | 0.6675 | 0.1382 | | 0.1776 | 21.14 | 33000 | 0.6147 | 0.1407 | | 0.1758 | 21.78 | 34000 | 0.6405 | 0.1420 | | 0.1645 | 22.42 | 35000 | 0.6999 | 0.1401 | | 0.1631 | 23.06 | 36000 | 0.6224 | 0.1385 | | 0.1494 | 23.7 | 37000 | 0.6639 | 0.1374 | | 0.1472 | 24.34 | 38000 | 0.6471 | 0.1373 | | 0.1514 | 24.98 | 39000 | 0.6570 | 0.1395 | | 0.1527 | 25.62 | 40000 | 0.6876 | 0.1375 | | 0.1514 | 26.27 | 41000 | 0.6835 | 0.1376 | | 0.1344 | 26.91 | 42000 | 0.6987 | 0.1372 | | 0.1267 | 27.55 | 43000 | 0.7026 | 0.1362 | | 0.1384 | 28.19 | 44000 | 0.7021 | 0.1366 | | 0.1264 | 28.83 | 45000 | 0.7016 | 0.1355 | | 0.1227 | 29.47 | 46000 | 0.6970 | 0.1352 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.2 - Datasets 1.18.2 - Tokenizers 0.10.3
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- license: mit tags: - generated_from_trainer model-index: - name: pump_intent_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pump_intent_test This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. ## Model description Custom data generated labeling text according to these three categories. These three categories are the subcategories of Pump - essentially when a user asks a question and expects an answer in response - Value: a slot value or a calculation - Clarification: Asking for further information on a previous answer - Testing: Testing for knowledge of facts and definitions Takes a user input of string text and classifies it according to one of three categories. ## Intended uses & limitations from transformers import pipeline classifier = pipeline("text-classification",model="mp6kv/pump_intent_test") output = classifier("What is the value of the length of the blue object?") score = output[0]['score'] label = output[0]['label'] ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_wikiqa_copy
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - generated_from_trainer datasets: - universal_dependencies metrics: - precision - recall - f1 - accuracy inference: false model-index: - name: distil-slovakbert-upos results: - task: name: Token Classification type: token-classification dataset: name: universal_dependencies sk_snk type: universal_dependencies args: sk_snk metrics: - name: Precision type: precision value: 0.9771104035797263 - name: Recall type: recall value: 0.9785418821096173 - name: F1 type: f1 value: 0.9778256189451022 - name: Accuracy type: accuracy value: 0.9800851200513933 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distil-slovakbert-upos This model is a fine-tuned version of [crabz/distil-slovakbert](https://huggingface.co/crabz/distil-slovakbert) on the universal_dependencies sk_snk dataset. It achieves the following results on the evaluation set: - Loss: 0.1207 - Precision: 0.9771 - Recall: 0.9785 - F1: 0.9778 - Accuracy: 0.9801 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 266 | 0.2168 | 0.9570 | 0.9554 | 0.9562 | 0.9610 | | 0.3935 | 2.0 | 532 | 0.1416 | 0.9723 | 0.9736 | 0.9730 | 0.9740 | | 0.3935 | 3.0 | 798 | 0.1236 | 0.9722 | 0.9735 | 0.9728 | 0.9747 | | 0.0664 | 4.0 | 1064 | 0.1195 | 0.9722 | 0.9741 | 0.9732 | 0.9766 | | 0.0664 | 5.0 | 1330 | 0.1160 | 0.9764 | 0.9772 | 0.9768 | 0.9789 | | 0.0377 | 6.0 | 1596 | 0.1194 | 0.9763 | 0.9776 | 0.9770 | 0.9790 | | 0.0377 | 7.0 | 1862 | 0.1188 | 0.9740 | 0.9755 | 0.9748 | 0.9777 | | 0.024 | 8.0 | 2128 | 0.1188 | 0.9762 | 0.9777 | 0.9769 | 0.9793 | | 0.024 | 9.0 | 2394 | 0.1207 | 0.9774 | 0.9789 | 0.9781 | 0.9802 | | 0.0184 | 10.0 | 2660 | 0.1207 | 0.9771 | 0.9785 | 0.9778 | 0.9801 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0 - Datasets 1.16.1 - Tokenizers 0.11.0
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4513 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 79 | 2.5347 | | 2.6681 | 2.0 | 158 | 2.4416 | | 2.6681 | 3.0 | 237 | 2.4634 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.6
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.8955 - name: F1 type: f1 value: 0.8918003951340884 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.3662 - Accuracy: 0.8955 - F1: 0.8918 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 125 | 0.5675 | 0.8265 | 0.8067 | | 0.7565 | 2.0 | 250 | 0.3662 | 0.8955 | 0.8918 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
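A minimal sketch of how the emotion classifier described above could be queried, assuming the `text-classification` pipeline; the model path is an assumption and should be replaced with the actual checkpoint location.

```python
from transformers import pipeline

# Assumed path/id of this checkpoint.
classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-emotion")

# The returned label maps to one of the emotion dataset's classes via model.config.id2label.
print(classifier("I can't wait to see you again!"))
```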
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- tags: - conversational --- # Tony Stark DialoGPT Model
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_wikiqa_copy
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xlsr-53-Total2e-4_4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-Total2e-4_4 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2474 - Wer: 0.1951 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.5015 | 0.1 | 200 | 2.9261 | 0.9707 | | 2.9197 | 0.2 | 400 | 2.7757 | 0.9707 | | 1.7594 | 0.3 | 600 | 0.6117 | 0.5746 | | 1.0908 | 0.4 | 800 | 0.4673 | 0.4530 | | 0.9441 | 0.5 | 1000 | 0.4142 | 0.4010 | | 0.8688 | 0.6 | 1200 | 0.3909 | 0.3675 | | 0.849 | 0.7 | 1400 | 0.3649 | 0.3360 | | 0.8223 | 0.8 | 1600 | 0.3532 | 0.3334 | | 0.821 | 0.9 | 1800 | 0.3513 | 0.3185 | | 0.7839 | 1.0 | 2000 | 0.3373 | 0.3039 | | 0.714 | 1.1 | 2200 | 0.3210 | 0.2922 | | 0.7129 | 1.2 | 2400 | 0.3216 | 0.2860 | | 0.7076 | 1.3 | 2600 | 0.3279 | 0.2843 | | 0.73 | 1.4 | 2800 | 0.3111 | 0.2662 | | 0.7256 | 1.5 | 3000 | 0.3032 | 0.2625 | | 0.72 | 1.6 | 3200 | 0.3066 | 0.2571 | | 0.6754 | 1.7 | 3400 | 0.2999 | 0.2581 | | 0.6859 | 1.8 | 3600 | 0.2935 | 0.2562 | | 0.6966 | 1.9 | 3800 | 0.2858 | 0.2469 | | 0.6791 | 2.0 | 4000 | 0.2857 | 0.2393 | | 0.6412 | 2.1 | 4200 | 0.2815 | 0.2392 | | 0.6356 | 2.2 | 4400 | 0.2836 | 0.2343 | | 0.6048 | 2.3 | 4600 | 0.2824 | 0.2422 | | 0.6473 | 2.4 | 4800 | 0.2805 | 0.2316 | | 0.659 | 2.5 | 5000 | 0.2775 | 0.2262 | | 0.6412 | 2.6 | 5200 | 0.2729 | 0.2249 | | 0.6167 | 2.7 | 5400 | 0.2719 | 0.2227 | | 0.6226 | 2.8 | 5600 | 0.2661 | 0.2193 | | 0.6168 | 2.9 | 5800 | 0.2615 | 0.2172 | | 0.6145 | 3.0 | 6000 | 0.2608 | 0.2148 | | 0.593 | 3.1 | 6200 | 0.2643 | 0.2123 | | 0.5919 | 3.2 | 6400 | 0.2617 | 0.2131 | | 0.6115 | 3.3 | 6600 | 0.2589 | 0.2114 | | 0.5859 | 3.4 | 6800 | 0.2591 | 0.2100 | | 0.5919 | 3.5 | 7000 | 0.2564 | 0.2103 | | 0.5873 | 3.6 | 7200 | 0.2572 | 0.2074 | | 0.561 | 3.7 | 7400 | 0.2561 | 0.2056 | | 0.5808 | 3.8 | 7600 | 0.2538 | 0.2062 | | 0.5701 | 3.9 | 7800 | 0.2517 | 0.2029 | | 0.5722 | 4.0 | 8000 | 0.2523 | 0.2007 | | 0.5508 | 4.1 | 8200 | 0.2570 | 0.2023 | | 0.5591 | 4.2 | 8400 | 0.2502 | 0.2029 | | 0.5697 | 4.3 | 8600 | 0.2478 | 0.1991 | | 0.5689 | 4.4 | 8800 | 0.2492 | 0.2021 | | 0.5345 | 4.5 | 9000 | 0.2498 | 0.2005 | | 0.5726 | 4.6 | 9200 | 0.2492 | 0.1983 | | 0.5382 | 4.7 | 9400 | 0.2487 | 0.1974 | | 0.5614 | 4.8 | 9600 | 0.2481 | 0.1957 | | 0.5568 | 4.9 | 9800 | 0.2477 | 0.1955 | | 0.5631 | 5.0 | 10000 | 0.2474 | 0.1951 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
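A minimal transcription sketch for the fine-tuned XLSR checkpoint above, assuming the `automatic-speech-recognition` pipeline; the model path is an assumption and should point at this checkpoint.

```python
from transformers import pipeline

# Assumed path/id of this fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="wav2vec2-large-xlsr-53-Total2e-4_4")

# Accepts a path to an audio file; 16 kHz mono input works best for XLSR-style models.
print(asr("sample.wav")["text"])
```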
AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - generated_from_trainer datasets: - wikiann inference: false model-index: - name: distil-slovakbert-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distil-slovakbert-ner This model is a fine-tuned version of [crabz/distil-slovakbert](https://huggingface.co/crabz/distil-slovakbert) on the wikiann sk dataset. - F1: 0.9307 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu113 - Datasets 1.15.1 - Tokenizers 0.11.0
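A minimal sketch of running the NER model described above on Slovak text, assuming the `token-classification` pipeline; the hub id is an assumption and should be replaced with the actual checkpoint path.

```python
from transformers import pipeline

# Assumed hub id; replace with the actual path of this checkpoint.
ner = pipeline("token-classification",
               model="crabz/distil-slovakbert-ner",
               aggregation_strategy="simple")

# Entities are grouped into spans with wikiann-style labels (PER/ORG/LOC).
print(ner("Milan Rastislav Štefánik sa narodil v Košariskách."))
```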
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - generated_from_trainer datasets: - cnn_dailymail model-index: - name: roberta_ernie_summarization_cnn_dailymail results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta_ernie_summarization_cnn_dailymail This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_1_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: tmplujkwod0 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tmplujkwod0 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5292 - Train Accuracy: 0.875 - Validation Loss: 0.5870 - Validation Accuracy: 0.5 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6565 | 0.625 | 0.7534 | 0.5 | 0 | | 0.5292 | 0.875 | 0.5870 | 0.5 | 1 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Tokenizers 0.11.6
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.86 - name: F1 type: f1 value: 0.8556701030927835 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.5523 - Accuracy: 0.86 - F1: 0.8557 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
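As a sketch of how the sentiment model above could be used without the pipeline helper, assuming the standard Auto classes; the model id is an assumption and should be replaced with the actual checkpoint path.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed path/id of this checkpoint.
model_id = "finetuning-sentiment-model-3000-samples"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A surprisingly touching film with great performances.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Class order follows model.config.id2label (negative vs. positive for IMDB).
print(probs)
```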
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
24
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - pub_med_summarization_dataset metrics: - rouge model-index: - name: distilbart-cnn-12-6-finetuned-pubmed results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: pub_med_summarization_dataset type: pub_med_summarization_dataset args: document metrics: - name: Rouge1 type: rouge value: 40.0985 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbart-cnn-12-6-finetuned-pubmed This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.9895 - Rouge1: 40.0985 - Rouge2: 16.5016 - Rougel: 24.8319 - Rougelsum: 36.0775 - Gen Len: 141.884 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 2.1709 | 1.0 | 4000 | 2.0257 | 38.1012 | 15.112 | 23.4064 | 33.9373 | 141.9195 | | 1.9495 | 2.0 | 8000 | 1.9593 | 39.529 | 16.1693 | 24.487 | 35.5238 | 141.9785 | | 1.756 | 3.0 | 12000 | 1.9488 | 39.9623 | 16.5799 | 24.949 | 35.9194 | 141.8855 | | 1.6032 | 4.0 | 16000 | 1.9732 | 39.672 | 16.1994 | 24.5996 | 35.7021 | 141.921 | | 1.4817 | 5.0 | 20000 | 1.9895 | 40.0985 | 16.5016 | 24.8319 | 36.0775 | 141.884 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
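A minimal summarization sketch for the checkpoint above, assuming the `summarization` pipeline; the model path and the generation lengths are assumptions and can be adjusted.

```python
from transformers import pipeline

# Assumed path/id of this checkpoint.
summarizer = pipeline("summarization", model="distilbart-cnn-12-6-finetuned-pubmed")

abstract = "..."  # a PubMed-style abstract or article body goes here
print(summarizer(abstract, max_length=150, min_length=40)[0]["summary_text"])
```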
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer, util query = "What is the large instrument the man is playing?" docs = ["A man is playing a large flute.", "A man is playing a flute."] #Load the model model = SentenceTransformer('clu-ling/roberta-finetuned-stsbenchmark') #Encode query and documents query_emb = model.encode(query) doc_emb = model.encode(docs) #Compute dot score between query and all document embeddings scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist() #Combine docs & scores doc_score_pairs = list(zip(docs, scores)) #Sort by decreasing score doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) #Output passages & scores for doc, score in doc_score_pairs: print(score, doc) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 125 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': True}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
This model provides a GPT-2 language model trained with SimCTG on the WritingPrompts benchmark [(Fan et al., 2018)](https://arxiv.org/abs/1805.04833) based on our paper [_A Contrastive Framework for Neural Text Generation_](https://arxiv.org/abs/2202.06417). We provide a detailed tutorial on how to apply SimCTG and Contrastive Search in our [project repo](https://github.com/yxuansu/SimCTG#4-huggingface-style-tutorials-back-to-top). In the following, we illustrate a brief tutorial on how to use our approach to perform text generation. ## 1. Installation of SimCTG: ```yaml pip install simctg --upgrade ``` ## 2. Initialize SimCTG Model: ```python import torch # load SimCTG language model from simctg.simctggpt import SimCTGGPT model_name = r'cambridgeltl/simctg_writingprompts' model = SimCTGGPT(model_name) model.eval() tokenizer = model.tokenizer ``` ## 3. Prepare the Text Prefix: ```python prefix_text = r"[ WP ] A kid doodling in a math class accidentally creates the world 's first functional magic circle in centuries . <|endoftext|>" print ('Prefix is: {}'.format(prefix_text)) tokens = tokenizer.tokenize(prefix_text) input_ids = tokenizer.convert_tokens_to_ids(tokens) input_ids = torch.LongTensor(input_ids).view(1,-1) ``` ## 4. Generate Text with Contrastive Search: ```python beam_width, alpha, decoding_len = 5, 0.6, 200 output = model.fast_contrastive_search(input_ids=input_ids, beam_width=beam_width, alpha=alpha, decoding_len=decoding_len) print("Output:\n" + 100 * '-') print(tokenizer.decode(output)) ''' Prefix is: [ WP ] A kid doodling in a math class accidentally creates the world 's first functional magic circle in centuries . <|endoftext|> Output: ---------------------------------------------------------------------------------------------------- [ WP ] A kid doodling in a math class accidentally creates the world's first functional magic circle in centuries. <|endoftext|> I looked at the circle, it wasn't there. I couldn't see it, and my eyes were watering from the rain that had fallen over the school, the wind howling through the windows and making a wispy noise as it passed through the air. `` What is it? '' I asked, trying to find the source of the noise. `` It's a circle, '' the teacher said in a voice that sounded like it was from an old TV show or something like that. `` You can't make it out of there. '' I looked around the room, there was no one there. It was as if I was in a dream, but no one seemed to notice me. Then I saw a flash of light, and the circle appeared in front of me. I turned around to see what was going on, I had never seen anything like it before in my life. I ran up to the teacher and asked, `` Are you sure this is real? ''' ``` For more details of our work, please refer to our main [project repo](https://github.com/yxuansu/SimCTG). ## 5. Citation: If you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks! ```bibtex @article{su2022contrastive, title={A Contrastive Framework for Neural Text Generation}, author={Su, Yixuan and Lan, Tian and Wang, Yan and Yogatama, Dani and Kong, Lingpeng and Collier, Nigel}, journal={arXiv preprint arXiv:2202.06417}, year={2022} } ```
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
23
null
--- tags: - generated_from_trainer model-index: - name: AmharicCacoPostag results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AmharicCacoPostag This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- tags: - generated_from_trainer model-index: - name: AmharicWICPostag results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AmharicWICPostag This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
AnonymousSub/rule_based_twostagequadruplet_hier_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: cc-by-nc-sa-4.0 datasets: - katanaml/cord tags: - generated_from_trainer model-index: - name: layoutlmv2-finetuned-cord results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-finetuned-cord This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on CORD dataset. ## Model description Model implementation code [Sparrow](https://github.com/katanaml/sparrow) ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
AnonymousSub/rule_based_twostagequadruplet_hier_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- tags: - generated_from_trainer model-index: - name: AmharicWICPostag10Tags results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AmharicWICPostag10Tags This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
AnonymousSub/rule_based_twostagetriplet_hier_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rare-puppers results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.644444465637207 --- # rare-puppers Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### dog drinking water ![dog drinking water](images/dog_drinking_water.jpg) #### dog eating food ![dog eating food](images/dog_eating_food.jpg) #### dog playing toy ![dog playing toy](images/dog_playing_toy.jpg) #### dog sleeping ![dog sleeping](images/dog_sleeping.jpg)
AnonymousSub/specter-bert-model_copy
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - billfrench/autonlp-data-cyberlandr-ai-4 co2_eq_emissions: 1.6912535041856878 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 614417501 - CO2 Emissions (in grams): 1.6912535041856878 ## Validation Metrics - Loss: 1.305419921875 - Accuracy: 0.5 - Macro F1: 0.3333333333333333 - Micro F1: 0.5 - Weighted F1: 0.4444444444444444 - Macro Precision: 0.375 - Micro Precision: 0.5 - Weighted Precision: 0.5 - Macro Recall: 0.375 - Micro Recall: 0.5 - Weighted Recall: 0.5 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/billfrench/autonlp-cyberlandr-ai-4-614417501 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("billfrench/autonlp-cyberlandr-ai-4-614417501", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("billfrench/autonlp-cyberlandr-ai-4-614417501", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
AnonymousSub/specter-bert-model_copy_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - billfrench/autonlp-data-cyberlandr-ai-4 co2_eq_emissions: 1.131603488976132 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 614417500 - CO2 Emissions (in grams): 1.131603488976132 ## Validation Metrics - Loss: 1.4588216543197632 - Accuracy: 0.3333333333333333 - Macro F1: 0.225 - Micro F1: 0.3333333333333333 - Weighted F1: 0.2333333333333333 - Macro Precision: 0.1875 - Micro Precision: 0.3333333333333333 - Weighted Precision: 0.20833333333333334 - Macro Recall: 0.375 - Micro Recall: 0.3333333333333333 - Weighted Recall: 0.3333333333333333 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/billfrench/autonlp-cyberlandr-ai-4-614417500 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("billfrench/autonlp-cyberlandr-ai-4-614417500", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("billfrench/autonlp-cyberlandr-ai-4-614417500", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
AnonymousSub/unsup-consert-base_copy
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - generated_from_trainer model-index: - name: librispeech-semi-supervised-without-LM results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # librispeech-semi-supervised-without-LM This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1837 - Wer: 0.0580 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.0565 | 0.56 | 1000 | 0.1354 | 0.0641 | | 0.0548 | 1.12 | 2000 | 0.1320 | 0.0628 | | 0.0478 | 1.68 | 3000 | 0.1247 | 0.0612 | | 0.0451 | 2.24 | 4000 | 0.1256 | 0.0613 | | 0.0401 | 2.8 | 5000 | 0.1269 | 0.0606 | | 0.035 | 3.36 | 6000 | 0.1370 | 0.0595 | | 0.0344 | 3.92 | 7000 | 0.1280 | 0.0589 | | 0.031 | 4.48 | 8000 | 0.1350 | 0.0589 | | 0.031 | 5.04 | 9000 | 0.1418 | 0.0614 | | 0.0278 | 5.61 | 10000 | 0.1382 | 0.0604 | | 0.0272 | 6.17 | 11000 | 0.1502 | 0.0615 | | 0.0246 | 6.73 | 12000 | 0.1443 | 0.0609 | | 0.0233 | 7.29 | 13000 | 0.1548 | 0.0589 | | 0.0224 | 7.85 | 14000 | 0.1547 | 0.0599 | | 0.0202 | 8.41 | 15000 | 0.1570 | 0.0590 | | 0.0199 | 8.97 | 16000 | 0.1564 | 0.0594 | | 0.0186 | 9.53 | 17000 | 0.1598 | 0.0595 | | 0.0187 | 10.09 | 18000 | 0.1657 | 0.0585 | | 0.017 | 10.65 | 19000 | 0.1690 | 0.0584 | | 0.016 | 11.21 | 20000 | 0.1689 | 0.0588 | | 0.0156 | 11.77 | 21000 | 0.1745 | 0.0585 | | 0.0151 | 12.33 | 22000 | 0.1777 | 0.0583 | | 0.0144 | 12.89 | 23000 | 0.1778 | 0.0590 | | 0.0142 | 13.45 | 24000 | 0.1803 | 0.0585 | | 0.0137 | 14.01 | 25000 | 0.1796 | 0.0581 | | 0.0132 | 14.57 | 26000 | 0.1837 | 0.0580 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.2 - Datasets 1.18.2 - Tokenizers 0.10.3
AragornII/DialoGPT-small-harrypotter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - abhishek/autonlp-data-swahili-sentiment co2_eq_emissions: 1.9057858628956459 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 615517563 - CO2 Emissions (in grams): 1.9057858628956459 ## Validation Metrics - Loss: 0.6990908980369568 - Accuracy: 0.695364238410596 - Macro F1: 0.6088819062581828 - Micro F1: 0.695364238410596 - Weighted F1: 0.677326207350606 - Macro Precision: 0.6945099492363175 - Micro Precision: 0.695364238410596 - Weighted Precision: 0.6938596845881614 - Macro Recall: 0.5738408020723632 - Micro Recall: 0.695364238410596 - Weighted Recall: 0.695364238410596 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-swahili-sentiment-615517563 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-swahili-sentiment-615517563", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-swahili-sentiment-615517563", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
ArjunKadya/HuggingFace
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - dit inference: false --- # Document Image Transformer (base-sized model) Document Image Transformer (DiT) model pre-trained on IIT-CDIP (Lewis et al., 2006), a dataset that includes 42 million document images. It was introduced in the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/dit). Note that DiT is identical to the architecture of [BEiT](https://huggingface.co/docs/transformers/model_doc/beit). Disclaimer: The team releasing DiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Document Image Transformer (DiT) is a transformer encoder model (BERT-like) pre-trained on a large collection of images in a self-supervised fashion. The pre-training objective for the model is to predict visual tokens from the encoder of a discrete VAE (dVAE), based on masked patches. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled document images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. ## Intended uses & limitations You can use the raw model for encoding document images into a vector space, but it's mostly meant to be fine-tuned on tasks like document image classification, table detection or document layout analysis. See the [model hub](https://huggingface.co/models?search=microsoft/dit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model in PyTorch: ```python from transformers import BeitImageProcessor, BeitForMaskedImageModeling import torch from PIL import Image image = Image.open('path_to_your_document_image').convert('RGB') processor = BeitImageProcessor.from_pretrained("microsoft/dit-base") model = BeitForMaskedImageModeling.from_pretrained("microsoft/dit-base") num_patches = (model.config.image_size // model.config.patch_size) ** 2 pixel_values = processor(images=image, return_tensors="pt").pixel_values # create random boolean mask of shape (batch_size, num_patches) bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool() outputs = model(pixel_values, bool_masked_pos=bool_masked_pos) loss, logits = outputs.loss, outputs.logits ``` ### BibTeX entry and citation info ```bibtex @article{Lewis2006BuildingAT, title={Building a test collection for complex document information processing}, author={David D. Lewis and Gady Agam and Shlomo Engelson Argamon and Ophir Frieder and David A. Grossman and Jefferson Heard}, journal={Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval}, year={2006} } ```
Augustvember/WokkaBot5
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-03-08T03:30:54Z
--- license: cc-by-sa-4.0 tags: - financial-sentiment-analysis - sentiment-analysis - sentence_50agree - generated_from_trainer - sentiment - finance datasets: - financial_phrasebank - Kaggle_Self_label - nickmuchi/financial-classification metrics: - accuracy - f1 - precision - recall widget: - text: The USD rallied by 10% last night example_title: Bullish Sentiment - text: >- Covid-19 cases have been increasing over the past few months impacting earnings for global firms example_title: Bearish Sentiment - text: the USD has been trending lower example_title: Mildly Bearish Sentiment model-index: - name: sec-bert-finetuned-finance-classification results: - task: name: Text Classification type: text-classification dataset: name: financial_phrasebank type: finance args: sentence_50agree metrics: - type: F1 name: F1 value: 0.8744 - type: accuracy name: accuracy value: 0.8755 language: - en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sec-bert-finetuned-finance-classification This model is a fine-tuned version of [nlpaueb/sec-bert-base](https://huggingface.co/nlpaueb/sec-bert-base) on the sentence_50Agree [financial-phrasebank + Kaggle Dataset](https://huggingface.co/datasets/nickmuchi/financial-classification), a dataset consisting of 4840 Financial News categorised by sentiment (negative, neutral, positive). The Kaggle dataset includes Covid-19 sentiment data and can be found here: [sentiment-classification-selflabel-dataset](https://www.kaggle.com/percyzheng/sentiment-classification-selflabel-dataset). It achieves the following results on the evaluation set: - Loss: 0.5277 - Accuracy: 0.8755 - F1: 0.8744 - Precision: 0.8754 - Recall: 0.8755 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.6005 | 0.99 | 71 | 0.3702 | 0.8478 | 0.8465 | 0.8491 | 0.8478 | | 0.3226 | 1.97 | 142 | 0.3172 | 0.8834 | 0.8822 | 0.8861 | 0.8834 | | 0.2299 | 2.96 | 213 | 0.3313 | 0.8814 | 0.8805 | 0.8821 | 0.8814 | | 0.1277 | 3.94 | 284 | 0.3925 | 0.8775 | 0.8771 | 0.8770 | 0.8775 | | 0.0764 | 4.93 | 355 | 0.4517 | 0.8715 | 0.8704 | 0.8717 | 0.8715 | | 0.0533 | 5.92 | 426 | 0.4851 | 0.8735 | 0.8728 | 0.8731 | 0.8735 | | 0.0363 | 6.9 | 497 | 0.5107 | 0.8755 | 0.8743 | 0.8757 | 0.8755 | | 0.0248 | 7.89 | 568 | 0.5277 | 0.8755 | 0.8744 | 0.8754 | 0.8755 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
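A minimal sketch for querying the financial-sentiment classifier above with the `text-classification` pipeline; the hub id is an assumption and should be replaced with the actual checkpoint path.

```python
from transformers import pipeline

# Assumed hub id; replace with the actual path of this checkpoint.
classifier = pipeline("text-classification", model="nickmuchi/sec-bert-finetuned-finance-classification")

# The widget examples from the card double as a quick smoke test.
print(classifier("The USD rallied by 10% last night"))
print(classifier("Covid-19 cases have been increasing over the past few months impacting earnings for global firms"))
```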
Ayham/bert_gpt2_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - generated_from_trainer datasets: - ncbi_disease metrics: - precision - recall - f1 - accuracy model-index: - name: bioBERT-NER-NCBI_disease results: - task: name: Token Classification type: token-classification dataset: name: ncbi_disease type: ncbi_disease args: ncbi_disease metrics: - name: Precision type: precision value: 0.8136200716845878 - name: Recall type: recall value: 0.8653113087674714 - name: F1 type: f1 value: 0.8386699507389163 - name: Accuracy type: accuracy value: 0.9850187265917603 widget: - text: "This model finds disease names such as Cholera, Cancer or COVID" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bioBERT-NER-NCBI_disease This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the ncbi_disease dataset. It achieves the following results on the evaluation set: - Loss: 0.0598 - Precision: 0.8136 - Recall: 0.8653 - F1: 0.8387 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0972 | 1.0 | 680 | 0.0688 | 0.7435 | 0.7624 | 0.7528 | 0.9794 | | 0.0397 | 2.0 | 1360 | 0.0508 | 0.7952 | 0.8780 | 0.8345 | 0.9840 | | 0.0118 | 3.0 | 2040 | 0.0598 | 0.8136 | 0.8653 | 0.8387 | 0.9850 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
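A minimal sketch for tagging disease mentions with the model above, assuming the `token-classification` pipeline; the model path is an assumption and should be replaced with the actual checkpoint location.

```python
from transformers import pipeline

# Assumed path/id of this checkpoint.
ner = pipeline("token-classification",
               model="bioBERT-NER-NCBI_disease",
               aggregation_strategy="simple")

# The widget sentence from the card works as a quick sanity check.
print(ner("This model finds disease names such as Cholera, Cancer or COVID"))
```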
Ayham/xlnet_roberta_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- language: en thumbnail: http://www.huggingtweets.com/feufillet-greatestquotes-hostagekiller/1646746104400/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1197820815636672513/JSCZmPDf_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1473236995497500675/FtwXDZld_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/378800000520968918/d38fd96468e9ba14c1f9f022eb0c4e61_400x400.png&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">sexy.funny.cute.pix & HUSSY2K. & Great Minds Quotes</div> <div style="text-align: center; font-size: 14px;">@feufillet-greatestquotes-hostagekiller</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from sexy.funny.cute.pix & HUSSY2K. & Great Minds Quotes. | Data | sexy.funny.cute.pix | HUSSY2K. | Great Minds Quotes | | --- | --- | --- | --- | | Tweets downloaded | 3091 | 3191 | 3200 | | Retweets | 149 | 865 | 0 | | Short tweets | 576 | 374 | 2 | | Tweets kept | 2366 | 1952 | 3198 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3afdee2s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @feufillet-greatestquotes-hostagekiller's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/25fcmxer) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/25fcmxer/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/feufillet-greatestquotes-hostagekiller') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Ayoola/cdial-yoruba-test
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers", "has_space" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- language: - tr tags: - roberta license: cc-by-nc-sa-4.0 datasets: - oscar --- # RoBERTa Turkish medium BPE 16k (uncased) Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned. Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is BPE. Vocabulary size is 16.7k. The details and performance comparisons can be found at this paper: https://arxiv.org/abs/2204.08832 The following code can be used for model loading and tokenization, example max length (514) can be changed: ``` model = AutoModel.from_pretrained([model_path]) #for sequence classification: #model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes]) tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path]) tokenizer.mask_token = "[MASK]" tokenizer.cls_token = "[CLS]" tokenizer.sep_token = "[SEP]" tokenizer.pad_token = "[PAD]" tokenizer.unk_token = "[UNK]" tokenizer.bos_token = "[CLS]" tokenizer.eos_token = "[SEP]" tokenizer.model_max_length = 514 ``` ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.08832, doi = {10.48550/ARXIV.2204.08832}, url = {https://arxiv.org/abs/2204.08832}, author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Impact of Tokenization on Language Models: An Analysis for Turkish}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
Ayoola/wav2vec2-large-xlsr-turkish-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - tr tags: - roberta license: cc-by-nc-sa-4.0 datasets: - oscar --- # RoBERTa Turkish medium Word-level 16k (uncased) Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned. Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Word-level, which means text is split by white space. Vocabulary size is 16.7k. The details and performance comparisons can be found at this paper: https://arxiv.org/abs/2204.08832 The following code can be used for model loading and tokenization, example max length (514) can be changed: ``` model = AutoModel.from_pretrained([model_path]) #for sequence classification: #model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes]) tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path]) tokenizer.mask_token = "[MASK]" tokenizer.cls_token = "[CLS]" tokenizer.sep_token = "[SEP]" tokenizer.pad_token = "[PAD]" tokenizer.unk_token = "[UNK]" tokenizer.bos_token = "[CLS]" tokenizer.eos_token = "[SEP]" tokenizer.model_max_length = 514 ``` ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.08832, doi = {10.48550/ARXIV.2204.08832}, url = {https://arxiv.org/abs/2204.08832}, author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Impact of Tokenization on Language Models: An Analysis for Turkish}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
Azaghast/DistilBART-SCP-ParaSummarization
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "BartForConditionalGeneration" ], "model_type": "bart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 142, "min_length": 56, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-100-pad-early-lit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-100-pad-early-lit This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1460 - Rouge1: 25.4944 - Rouge2: 7.9048 - Rougel: 16.2879 - Rougelsum: 20.883 - Gen Len: 64.3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 100 | 1.0390 | 27.3059 | 10.0672 | 19.7294 | 23.0611 | 62.1 | | No log | 2.0 | 200 | 1.1460 | 25.4944 | 7.9048 | 16.2879 | 20.883 | 64.3 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
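Since the card only documents training, here is a minimal inference sketch with the summarization pipeline. The repository ID is a placeholder for the actual hub path, and the generation lengths simply echo the ~64-token average reported above.

```python
from transformers import pipeline

# Placeholder repo ID -- replace with the actual hub path of this BART fine-tune.
summarizer = pipeline(
    "summarization",
    model="your-username/bart-large-cnn-100-pad-early-lit",
)

passage = (
    "Paste the literary passage to be condensed here. "
    "Longer inputs are truncated to the model's maximum source length."
)
summary = summarizer(passage, max_length=64, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
```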
Azura/data
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en tags: - distilbert - long context --- # LSG model **Transformers >= 4.23.1**\ **This model relies on a custom modeling file, you need to add trust_remote_code=True**\ **See [\#13467](https://github.com/huggingface/transformers/pull/13467)** LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \ Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg). * [Usage](#usage) * [Parameters](#parameters) * [Sparse selection type](#sparse-selection-type) * [Tasks](#tasks) * [Training global tokens](#training-global-tokens) This model is adapted from [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer This model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). Support encoder-decoder and causal masking but I didnt test it extensively.\ Implemented in PyTorch. ![attn](attn.png) ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ```python: from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096") ``` ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask first token since it is redundant with the first global token) * see config.json file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ```python: from transformers import AutoModel model = AutoModel.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096", trust_remote_code=True, num_global_tokens=16, block_size=64, sparse_block_size=64, attention_probs_dropout_prob=0.0 sparsity_factor=4, sparsity_type="none", mask_first_token=True ) ``` ## Sparse selection type There are 5 different sparse selection patterns. The best type is task dependent. \ Note that for sequences with length < 2*block_size, the type has no effect. 
* sparsity_type="norm", select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * sparsity_type="pooling", use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * sparsity_type="lsh", use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * sparsity_type="stride", use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * sparsity_type="block_stride", use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Fill mask example: ```python: from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096") SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."] pipeline = FillMaskPipeline(model, tokenizer) output = pipeline(SENTENCES, top_k=1) output = [o[0]["sequence"] for o in output] > ['Paris is the capital of France.', 'The goal of life is happiness.'] ``` Classification example: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096") SENTENCE = "This is a test for sequence classification. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", #pad_to_multiple_of=... # Optional truncation=True ) output = model(**token_ids) > SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) ``` ## Training global tokens To train global tokens and the classification head only: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token num_global_tokens=16 ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096") for name, param in model.named_parameters(): if "global_embeddings" not in name: param.requires_grad = False else: param.required_grad = True ```
Azuris/DialoGPT-medium-envy
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-de-with-lm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-de-with-lm This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.9.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
BAHIJA/distilbert-base-uncased-finetuned-cola
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
null
--- tags: - conversational --- # My Awesome Model
BME-TMIT/foszt2oszt
[ "pytorch", "encoder-decoder", "text2text-generation", "hu", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- name: "K-POP" license: "mit" metrics: - MAE - PLCC - SRCC - R2 tags: - focus-prediction - microscopy - pytorch --- # K-POP: Predicting Distance to Focal Plane for Kato-Katz Prepared Microscopy Slides Using Deep Learning <a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a><a href="https://pytorchlightning.ai/"> <img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a> <a href="https://hydra.cc/"><img alt="Config: Hydra" src="https://img.shields.io/badge/Config-Hydra-89b8cd"></a> ## Description This repository contains the models and training pipeline for my master thesis. The main repository is hosted on [GitHub](https://github.com/13hannes11/master_thesis_code). The project structure is based on the template by [ashleve](https://github.com/ashleve/lightning-hydra-template). The metadata is stored in `data/focus150/`. The relevant files are `test_metadata.csv`, `train_metadata.csv` and `validation_metadata.csv`. Image data (of 150 x 150 px images) is not published together with this repository therefore training runs are not possible to do without it. The layout of the metadata files is as follows ```csv ,image_path,scan_uuid,study_id,focus_height,original_filename,stack_id,obj_name 0,31/b0d4005e-57d0-4516-a239-abe02a8d0a67/I02413_X009_Y014_Z5107_750_300.jpg,b0d4005e-57d0-4516-a239-abe02a8d0a67,31,-0.013672000000000017,I02413_X009_Y014_Z5107.jpg,1811661,schistosoma 1,31/274d8969-aa7c-4ac0-be60-e753579393ad/I01981_X019_Y014_Z4931_450_0.jpg,274d8969-aa7c-4ac0-be60-e753579393ad,31,-0.029296999999999962,I01981_X019_Y014_Z4931.jpg,1661371,schistosoma ... ``` ## How to run Train model with chosen experiment configuration from `configs/experiment/` ```bash python train.py experiment=focusResNet_150 ``` Train with hyperparameter search from `configs/hparams_search/` ```bash python train.py -m hparams_search=focusResNetMSE_150 ``` You can override any parameter from command line like this ```bash python train.py trainer.max_epochs=20 datamodule.batch_size=64 ``` ## Jupyter notebooks Figures and other evaluation code was run in Jupyter notebooks. These are available at `notebooks/`
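To make the metadata layout above concrete, here is a small sketch for loading the three splits. The file paths follow the card's description; pandas is my choice for illustration and is not necessarily what the repository's own datamodules use.

```python
import pandas as pd

# Paths as described in the card; adjust if your checkout differs.
splits = {
    "train": "data/focus150/train_metadata.csv",
    "validation": "data/focus150/validation_metadata.csv",
    "test": "data/focus150/test_metadata.csv",
}

# The first, unnamed CSV column is a running index.
frames = {name: pd.read_csv(path, index_col=0) for name, path in splits.items()}

for name, df in frames.items():
    print(f"{name}: {len(df)} rows")

# focus_height is the regression target (distance to the focal plane);
# image_path points at the 150 x 150 px crops that are not shipped with the repo.
print(frames["train"][["image_path", "focus_height", "obj_name"]].head())
```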
BOON/electra_qa
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - masked-image-modeling - generated_from_trainer model-index: - name: dit-base-manuscripts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dit-base-manuscripts This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the davanstrien/iiif_manuscripts_label_ge_50 dataset. It achieves the following results on the evaluation set: - Loss: 1.1266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 1333 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1396 | 1.0 | 32 | 1.1261 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
BSC-LT/roberta-base-bne-capitel-pos
[ "pytorch", "roberta", "token-classification", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "capitel", "pos", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- language: en tags: - question_answering datasets: - z-uo/qasper-squad --- # roberta-base for QA with qasper Train from deepset/roberta-base-squad2. How to use by python code: ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline # Load model with pipeline model_name = "z-uo/roberta-qasper" nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) # Get predictions QA_input = { 'question': 'what they propose?', 'context': "In this paper, we provide an innovative contribution in the research domain dedicated to crop mapping by exploiting the of Sentinel-2 satellite images time series, with the specific aim to extract information on 'where and when' crops are grown. The final goal is to set up a workflow able to reliably identify (classify) the different crops that are grown in a given area by exploiting an end-to-end (3+2)D convolutional neural network (CNN) for semantic segmentation. The method also has the ambition to provide information, at pixel level, regarding the period in which a given crop is cultivated during the season. To this end, we propose a solution called Class Activation Interval (CAI) which allows us to interpret, for each pixel, the reasoning made by CNN in the classification determining in which time interval, of the input time series, the class is likely to be present or not. Our experiments, using a public domain dataset, show that the approach is able to accurately detect crop classes with an overall accuracy of about 93% and that the network can detect discriminatory time intervals in which crop is cultivated. These results have twofold importance: (i) demonstrate the ability of the network to correctly interpret the investigated physical process (i.e., bare soil condition, plant growth, senescence and harvesting according to specific cultivated variety) and (ii) provide further information to the end-user (e.g., the presence of crops and its temporal dynamics)." } res = nlp(QA_input) # Load model & tokenizer without pipeline model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ```
BSC-LT/roberta-base-bne-sqac
[ "pytorch", "roberta", "question-answering", "es", "dataset:BSC-TeMU/SQAC", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "qa", "question answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
{
  'max_seq_length': 384,
  'batch_size': 24,
  'learning_rate': {'val': 3e-5, 'scheduler': 'Linear'},
  'max_clip_norm': None,
  'epochs': 2
}
BSC-LT/roberta-base-bne
[ "pytorch", "roberta", "fill-mask", "es", "dataset:bne", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
594
null
{
  'max_seq_length': 384,
  'batch_size': 8,
  'learning_rate': {'val': 5e-5, 'scheduler': 'Linear'},
  'max_clip_norm': None,
  'epochs': 2
}
BSC-LT/roberta-large-bne-capitel-pos
[ "pytorch", "roberta", "token-classification", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "capitel", "pos", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- language: en license: apache-2.0 --- ## Overview Model included in a paper for modeling fine grained similarity between documents: **Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity" **Authors**: Sheshera Mysore, Arman Cohan, Tom Hope **Paper**: https://arxiv.org/abs/2111.08366 **Github**: https://github.com/allenai/aspire **Note**: In the context of the paper, this model is referred to as `cosentbert` and represents a baseline sentence encoder for scientific text. The paper trains two versions of `cosentbert`, one for biomedical scientific text and another one for computer science text. This released model is trained on a union of all available data across scientific domains in the Semantic Scholar Open Research Corpus (S2ORC) dataset. This difference in training data leads to different, though close, evaluation performance than in the paper. ## Model Card **Model description:** This model represents a SciBERT based sentence encoder pre-trained for scientific text similarity. The model represents a sentence with a single vector obtained by reading the CLS token for the sentence. **Training data:** The model is trained on sets of co-citation context sentences referencing the same set of papers in a contrastive learning setup. These sentences can often be considered as paraphrases since co-citation sentences citing the same papers often describe similar aspects of the co-cited papers. The model is trained on 4.3 million sentence pairs of this type. In training the model negative examples for the contrastive loss are obtained as random in-batch negatives. An example pair of sentences used for training is as follows: > "The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base." > > "Distant supervision [31, 43, 21, 49] generates training data automatically by aligning texts and a knowledge base (KB) (see Fig. 1 )." **Training procedure:** The model was trained with the Adam Optimizer and a learning rate of 2e-5 with 1000 warm-up steps followed by linear decay of the learning rate. The model training convergence is checked with the loss on a held out dev set consisting of co-citation context pairs. All the training data used was in English. **Intended uses & limitations:** This model is trained for sentence similarity tasks in scientific text and is best used as a sentence encoder. However with appropriate fine-tuning the model can also be used for other tasks such as classification. Note that about 50% of the training data consists of text from biomedical text and performance may be superior on text from bio-medicine and similar domains. **How to use:** This model can be used as a BERT model via the `transformers` library: ``` from transformers import AutoModel, AutoTokenizer aspire_sent = AutoModel.from_pretrained('allenai/aspire-sentence-embedder') aspire_tok = AutoTokenizer.from_pretrained('allenai/aspire-sentence-embedder') s='We present a new scientific document similarity model based on matching fine-grained aspects of texts.' 
inputs = aspire_tok(s, padding=True, truncation=True, return_tensors="pt", max_length=512)
result = aspire_sent(**inputs)
clsrep = result.last_hidden_state[:,0,:]  # CLS-token representation of the sentence
```
OR via the `sentence_transformers` library:
```
from sentence_transformers import SentenceTransformer, models
word_embedding_model = models.Transformer('allenai/aspire-sentence-embedder', max_seq_length=512)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode='cls')
aspire_sb = SentenceTransformer(modules=[word_embedding_model, pooling_model])
clsrep_sb = aspire_sb.encode([s])  # use the SentenceTransformer defined above
```
**Variables and metrics:** Since the paper this model was trained for proposes methods for similarity of scientific abstracts, the model is evaluated on information retrieval datasets with document-level queries. The datasets used for the paper include RELISH (biomedical/English), TRECCOVID (biomedical/English), and CSFCube (computer science/English). These are all detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). RELISH and TRECCOVID represent an abstract-level retrieval task, where given a query scientific abstract the task requires the retrieval of relevant candidate abstracts. CSFCube presents a slightly different task and provides a set of finer-grained sentences in the abstract based on which a finer-grained retrieval must be made. This task is the closest to a sentence similarity task. When using this sentence-level model for abstract-level retrieval, we rank documents by the minimal L2 distance between the sentences in the query and the candidate abstract.

**Evaluation results:** The released model `aspire-sentence-embedder` is compared against 1) `all-mpnet-base-v2`, a sentence-bert model trained on ~1 billion training examples, 2) `paraphrase-TinyBERT-L6-v2`, a sentence-bert model trained on paraphrase pairs, and 3) the `cosentbert` models used in our paper.

| | CSFCube aggregated | CSFCube aggregated | TRECCOVID | TRECCOVID | RELISH | RELISH |
|-------------------------------------------:|:------------------:|:-------:|:---------:|:-------:|:------:|:-------:|
| | MAP | NDCG%20 | MAP | NDCG%20 | MAP | NDCG%20 |
| `all-mpnet-base-v2` | 34.64 | 54.94 | 17.35 | 43.87 | 52.92 | 69.69 |
| `paraphrase-TinyBERT-L6-v2` | 26.77 | 48.57 | 11.12 | 34.85 | 50.80 | 67.35 |
| `cosentbert` | 28.95 | 50.68 | 12.80 | 38.07 | 50.04 | 66.35 |
| `aspire-sentence-embedder` | 30.58 | 53.86 | 11.64 | 36.50 | 50.36 | 66.63 |

The released model sees similar performance across datasets to the per-domain `cosentbert` models used in our paper (and reported above).
BSC-LT/roberta-large-bne
[ "pytorch", "roberta", "fill-mask", "es", "dataset:bne", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
24
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - natural_questions model-index: - name: distilbert-base-uncased-finetuned-natural-questions results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-natural-questions This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the natural_questions dataset. It achieves the following results on the evaluation set: - Loss: 0.6267 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 2.0532 | 1.0 | 5104 | 0.2393 | | 1.8912 | 2.0 | 10208 | 0.2284 | | 1.7854 | 3.0 | 15312 | 0.2357 | | 1.6856 | 4.0 | 20416 | 0.2487 | | 1.5918 | 5.0 | 25520 | 0.2743 | | 1.5067 | 6.0 | 30624 | 0.2586 | | 1.4323 | 7.0 | 35728 | 0.2763 | | 1.365 | 8.0 | 40832 | 0.2753 | | 1.3162 | 9.0 | 45936 | 0.3200 | | 1.281 | 10.0 | 51040 | 0.3127 | | 1.308 | 11.0 | 57104 | 0.2947 | | 1.241 | 12.0 | 62208 | 0.2941 | | 1.1391 | 13.0 | 67312 | 0.3103 | | 1.0334 | 14.0 | 72416 | 0.3694 | | 0.9538 | 15.0 | 77520 | 0.3658 | | 0.8749 | 16.0 | 82624 | 0.4009 | | 0.8154 | 17.0 | 87728 | 0.3672 | | 0.7533 | 18.0 | 92832 | 0.3675 | | 0.7079 | 19.0 | 97936 | 0.4611 | | 0.6658 | 20.0 | 103040 | 0.4222 | | 0.595 | 21.0 | 108144 | 0.4095 | | 0.5765 | 22.0 | 113248 | 0.4400 | | 0.5259 | 23.0 | 118352 | 0.5109 | | 0.4804 | 24.0 | 123456 | 0.4711 | | 0.4389 | 25.0 | 128560 | 0.5072 | | 0.4034 | 26.0 | 133664 | 0.5363 | | 0.374 | 27.0 | 138768 | 0.5460 | | 0.3434 | 28.0 | 143872 | 0.5627 | | 0.3181 | 29.0 | 148976 | 0.5657 | | 0.2971 | 30.0 | 154080 | 0.5819 | | 0.275 | 31.0 | 159184 | 0.5649 | | 0.2564 | 32.0 | 164288 | 0.6087 | | 0.2431 | 33.0 | 169392 | 0.6137 | | 0.2289 | 34.0 | 174496 | 0.6123 | | 0.2151 | 35.0 | 179600 | 0.5979 | | 0.2041 | 36.0 | 184704 | 0.6196 | | 0.1922 | 37.0 | 189808 | 0.6191 | | 0.1852 | 38.0 | 194912 | 0.6313 | | 0.1718 | 39.0 | 200016 | 0.6234 | | 0.1718 | 39.81 | 204160 | 0.6267 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0 - Datasets 1.18.4 - Tokenizers 0.11.6
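The card lists only training and validation losses; below is a minimal inference sketch. It assumes the checkpoint carries an extractive question-answering head, which the card does not state explicitly, and it uses a placeholder repository ID.

```python
from transformers import pipeline

# Placeholder repo ID; assumes an extractive QA head (not stated explicitly in the card).
qa = pipeline(
    "question-answering",
    model="your-username/distilbert-base-uncased-finetuned-natural-questions",
)

result = qa(
    question="How many epochs was the model trained for?",
    context="The DistilBERT checkpoint was fine-tuned on Natural Questions for roughly 40 epochs.",
)
print(result["answer"], round(result["score"], 3))
```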
Babelscape/rebel-large
[ "pytorch", "safetensors", "bart", "text2text-generation", "en", "dataset:Babelscape/rebel-dataset", "transformers", "seq2seq", "relation-extraction", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "has_space" ]
text2text-generation
{ "architectures": [ "BartForConditionalGeneration" ], "model_type": "bart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9,458
2022-03-08T20:17:30Z
--- license: apache-2.0 tags: - automatic-speech-recognition - google/xtreme_s - generated_from_trainer datasets: - xtreme_s metrics: - accuracy model-index: - name: xtreme_s_xlsr_minds14_fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtreme_s_xlsr_minds14_fr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.FR-FR dataset. It achieves the following results on the evaluation set: - Loss: 0.3922 - Accuracy: 0.9135 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9751 | 10.0 | 50 | 2.0203 | 0.3462 | | 0.4275 | 20.0 | 100 | 0.7434 | 0.7981 | | 0.2484 | 30.0 | 150 | 0.7686 | 0.8462 | | 0.0263 | 40.0 | 200 | 0.3922 | 0.9135 | | 0.0118 | 50.0 | 250 | 0.4859 | 0.9038 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4.dev0 - Tokenizers 0.11.6
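The card covers training on the MInDS-14 French split (a spoken intent-classification task) but gives no inference example; a sketch with the audio-classification pipeline follows. The repository ID and the audio file name are placeholders.

```python
from transformers import pipeline

# Placeholder repo ID -- replace with the actual hub path of this MInDS-14 fine-tune.
classifier = pipeline(
    "audio-classification",
    model="your-username/xtreme_s_xlsr_minds14_fr",
)

# Any 16 kHz mono recording of a French banking request; the file name is illustrative.
for prediction in classifier("example_fr_utterance.wav", top_k=3):
    print(prediction["label"], round(prediction["score"], 3))
```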
Babysittingyoda/DialoGPT-small-familyguy
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- tags: - generated_from_trainer datasets: - xnli metrics: - accuracy model-index: - name: spanish-TinyBERT-betito-finetuned-xnli-es results: - task: name: Text Classification type: text-classification dataset: name: xnli type: xnli args: es metrics: - name: Accuracy type: accuracy value: 0.7475049900199601 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanish-TinyBERT-betito-finetuned-xnli-es This model is a fine-tuned version of [mrm8488/spanish-TinyBERT-betito](https://huggingface.co/mrm8488/spanish-TinyBERT-betito) on the xnli dataset. It achieves the following results on the evaluation set: - Loss: 0.7104 - Accuracy: 0.7475 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.50838112218154e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 13 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.7191 | 1.0 | 49399 | 0.6829 | 0.7112 | | 0.6323 | 2.0 | 98798 | 0.6527 | 0.7305 | | 0.5727 | 3.0 | 148197 | 0.6531 | 0.7465 | | 0.4964 | 4.0 | 197596 | 0.7079 | 0.7427 | | 0.4929 | 5.0 | 246995 | 0.7104 | 0.7475 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
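Because XNLI is a sentence-pair task, inference needs a premise and a hypothesis encoded together; the sketch below shows this with a placeholder repository ID. The label order is read from the checkpoint's config rather than assumed.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repo ID -- replace with the actual hub path of this XNLI fine-tune.
model_id = "your-username/spanish-TinyBERT-betito-finetuned-xnli-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "El equipo ganó el partido por tres goles."
hypothesis = "El equipo perdió el partido."

# Encode the pair jointly, as done for NLI fine-tuning.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

# Map probabilities to the labels declared in the checkpoint's config.
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```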
Bagus/SER-LSSED
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
{
  'max_seq_length': 384,
  'batch_size': 8,
  'learning_rate': {'val': 5e-5, 'scheduler': 'Linear'},
  'max_clip_norm': None,
  'epochs': 2
}
BalajiSathesh/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - "uk" tags: - "ukrainian" - "masked-lm" - "ubertext" license: "cc-by-sa-4.0" pipeline_tag: "fill-mask" mask_token: "[MASK]" --- # roberta-base-ukrainian ## Model Description This is a RoBERTa model pre-trained on [Корпус UberText](https://lang.org.ua/uk/corpora/#anchor4). You can fine-tune `roberta-base-ukrainian` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-ukrainian-upos), dependency-parsing, and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-ukrainian") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-ukrainian") ```
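A quick fill-mask check, using the repository ID already given in the loading snippet above; the example sentence is mine, not from the card.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="KoichiYasuoka/roberta-base-ukrainian")

# [MASK] is the mask token declared in the card's front matter.
# "Київ – [MASK] України." = "Kyiv is the [MASK] of Ukraine."
for prediction in unmasker("Київ – [MASK] України."):
    print(prediction["token_str"], round(prediction["score"], 3))
```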
Balgow/prod_desc
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Data origin: https://recipenlg.cs.put.poznan.pl/dataset

Create the conda environment:

```
conda env create -v -f Recipe-Creator.yml
conda activate Recipe-Creator
```
Banshee/dialoGPT-luke-small
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - "uk" tags: - "ukrainian" - "token-classification" - "pos" - "ubertext" - "dependency-parsing" datasets: - "universal_dependencies" - "ukr-models/Ukr-Synth" license: "cc-by-sa-4.0" pipeline_tag: "token-classification" widget: - text: "Свобода і незалежність – найголовніші умови успіху і процвітання." --- # roberta-base-ukrainian-upos ## Model Description This is a RoBERTa model pre-trained on Корпус UberText for POS-tagging and dependency-parsing, derived from [roberta-base-ukrainian](https://huggingface.co/KoichiYasuoka/roberta-base-ukrainian). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-ukrainian-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-ukrainian-upos") ``` or ``` import esupar nlp=esupar.load("KoichiYasuoka/roberta-base-ukrainian-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
Barleysack/klue-roberta-LSTM
[ "pytorch", "roberta", "transformers" ]
null
{ "architectures": [ "QAWithLSTMModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-demo This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4239 - Wer: 0.3508 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4093 | 4.0 | 500 | 1.2405 | 0.8685 | | 0.5597 | 8.0 | 1000 | 0.4538 | 0.4437 | | 0.2113 | 12.0 | 1500 | 0.4106 | 0.3749 | | 0.1188 | 16.0 | 2000 | 0.4609 | 0.3775 | | 0.0776 | 20.0 | 2500 | 0.4239 | 0.3508 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.10.3
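The card reports WER but no decoding example; a minimal transcription sketch follows. The repository ID and the audio file name are placeholders, and the audio must be 16 kHz mono as expected by wav2vec 2.0.

```python
from transformers import pipeline

# Placeholder repo ID -- replace with the actual hub path of this fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/wav2vec2-demo",
)

# The file name is illustrative; wav2vec 2.0 expects 16 kHz mono audio.
print(asr("sample_16khz.wav")["text"])
```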
BatuhanYilmaz/code-search-net-tokenizer1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - image-classification - generated_from_trainer datasets: - chest xrays widget: - src: https://drive.google.com/uc?id=1yqnhD4Wjt4Y_NGLtijTGGaaw9GL497kQ example_title: PNEUMONIA - src: https://drive.google.com/uc?id=1xjcIEDb8kuSd4wF44gCEgsc0PfRvs53m example_title: NORMAL metrics: - accuracy model-index: - name: vit-base-xray-pneumonia results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-xray-pneumonia This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [chest-xray-pneumonia](https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia) dataset. It achieves the following results on the evaluation set: - Loss: 0.3387 - Accuracy: 0.9006 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1233 | 0.31 | 100 | 1.1662 | 0.6651 | | 0.0868 | 0.61 | 200 | 0.3387 | 0.9006 | | 0.1387 | 0.92 | 300 | 0.5297 | 0.8237 | | 0.1264 | 1.23 | 400 | 0.4566 | 0.8590 | | 0.0829 | 1.53 | 500 | 0.6832 | 0.8285 | | 0.0734 | 1.84 | 600 | 0.4886 | 0.8157 | | 0.0132 | 2.15 | 700 | 1.3639 | 0.7292 | | 0.0877 | 2.45 | 800 | 0.5258 | 0.8846 | | 0.0516 | 2.76 | 900 | 0.8772 | 0.8013 | | 0.0637 | 3.07 | 1000 | 0.4947 | 0.8558 | | 0.0022 | 3.37 | 1100 | 1.0062 | 0.8045 | | 0.0555 | 3.68 | 1200 | 0.7822 | 0.8285 | | 0.0405 | 3.99 | 1300 | 1.9288 | 0.6779 | | 0.0012 | 4.29 | 1400 | 1.2153 | 0.7981 | | 0.0034 | 4.6 | 1500 | 1.8931 | 0.7308 | | 0.0339 | 4.91 | 1600 | 0.9071 | 0.8590 | | 0.0013 | 5.21 | 1700 | 1.6266 | 0.7580 | | 0.0373 | 5.52 | 1800 | 1.5252 | 0.7676 | | 0.001 | 5.83 | 1900 | 1.2748 | 0.7869 | | 0.0005 | 6.13 | 2000 | 1.2103 | 0.8061 | | 0.0004 | 6.44 | 2100 | 1.3133 | 0.7981 | | 0.0004 | 6.75 | 2200 | 1.2200 | 0.8045 | | 0.0004 | 7.06 | 2300 | 1.2834 | 0.7933 | | 0.0004 | 7.36 | 2400 | 1.3080 | 0.7949 | | 0.0003 | 7.67 | 2500 | 1.3814 | 0.7917 | | 0.0004 | 7.98 | 2600 | 1.2853 | 0.7965 | | 0.0003 | 8.28 | 2700 | 1.3644 | 0.7933 | | 0.0003 | 8.59 | 2800 | 1.3137 | 0.8013 | | 0.0003 | 8.9 | 2900 | 1.3507 | 0.7997 | | 0.0003 | 9.2 | 3000 | 1.3751 | 0.7997 | | 0.0003 | 9.51 | 3100 | 1.3884 | 0.7981 | | 0.0003 | 9.82 | 3200 | 1.3831 | 0.7997 | ## Example Images #### Pneumonia Chest X-Ray ![Pneumonia](https://drive.google.com/uc?id=1yqnhD4Wjt4Y_NGLtijTGGaaw9GL497kQ) #### Normal Chest X-Ray ![Normal](https://drive.google.com/uc?id=1xjcIEDb8kuSd4wF44gCEgsc0PfRvs53m) ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
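To complement the example images above, here is a minimal classification sketch. The repository ID is a placeholder and the local file name is illustrative; the card's widget URLs can also be passed directly, since the pipeline accepts image URLs.

```python
from transformers import pipeline

# Placeholder repo ID -- replace with the actual hub path of this ViT fine-tune.
classifier = pipeline(
    "image-classification",
    model="your-username/vit-base-xray-pneumonia",
)

# A local chest X-ray image; the card's widget URLs work here as well.
for prediction in classifier("chest_xray.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```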
Bharathdamu/wav2vec2-model-hindi-stt
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - tr tags: - roberta license: cc-by-nc-sa-4.0 datasets: - oscar --- # RoBERTa Turkish medium WordPiece 28k (uncased) Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned. Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is WordPiece. Vocabulary size is 28.6k. The details and performance comparisons can be found at this paper: https://arxiv.org/abs/2204.08832 The following code can be used for model loading and tokenization, example max length (514) can be changed: ``` model = AutoModel.from_pretrained([model_path]) #for sequence classification: #model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes]) tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path]) tokenizer.mask_token = "[MASK]" tokenizer.cls_token = "[CLS]" tokenizer.sep_token = "[SEP]" tokenizer.pad_token = "[PAD]" tokenizer.unk_token = "[UNK]" tokenizer.bos_token = "[CLS]" tokenizer.eos_token = "[SEP]" tokenizer.model_max_length = 514 ``` ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.08832, doi = {10.48550/ARXIV.2204.08832}, url = {https://arxiv.org/abs/2204.08832}, author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Impact of Tokenization on Language Models: An Analysis for Turkish}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
Bharathdamu/wav2vec2-model-hindibhasha
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en thumbnail: http://www.huggingtweets.com/aniraster_/1646816595677/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1460097593015472141/Yt6YwEU1_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Aniraster</div> <div style="text-align: center; font-size: 14px;">@aniraster_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Aniraster. | Data | Aniraster | | --- | --- | | Tweets downloaded | 2581 | | Retweets | 169 | | Short tweets | 660 | | Tweets kept | 1752 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nr4gbjn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aniraster_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3g7h1bov) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3g7h1bov/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/aniraster_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Bhumika/roberta-base-finetuned-sst2
[ "pytorch", "tensorboard", "roberta", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "model-index" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
85
null
--- language: - tr tags: - roberta license: cc-by-nc-sa-4.0 datasets: - oscar --- # RoBERTa Turkish medium WordPiece 44k (uncased) Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned. Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is WordPiece. Vocabulary size is 44.5k. The details can be found at this paper: https://arxiv.org/abs/2204.08832 The following code can be used for model loading and tokenization, example max length (514) can be changed: ``` model = AutoModel.from_pretrained([model_path]) #for sequence classification: #model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes]) tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path]) tokenizer.mask_token = "[MASK]" tokenizer.cls_token = "[CLS]" tokenizer.sep_token = "[SEP]" tokenizer.pad_token = "[PAD]" tokenizer.unk_token = "[UNK]" tokenizer.bos_token = "[CLS]" tokenizer.eos_token = "[SEP]" tokenizer.model_max_length = 514 ``` ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.08832, doi = {10.48550/ARXIV.2204.08832}, url = {https://arxiv.org/abs/2204.08832}, author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Impact of Tokenization on Language Models: An Analysis for Turkish}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
Biasface/DDDC2
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: autonlp language: zh widget: - text: "I love AutoNLP 🤗" datasets: - kyleinincubated/autonlp-data-abbb co2_eq_emissions: 2.22514962526191 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 622117836 - CO2 Emissions (in grams): 2.22514962526191 ## Validation Metrics - Loss: 1.2368708848953247 - Accuracy: 0.7973333333333333 - Macro F1: 0.46009076588978487 - Micro F1: 0.7973333333333333 - Weighted F1: 0.7712349116681224 - Macro Precision: 0.4527155928883903 - Micro Precision: 0.7973333333333333 - Weighted Precision: 0.7610710955220162 - Macro Recall: 0.4947868561369568 - Micro Recall: 0.7973333333333333 - Weighted Recall: 0.7973333333333333 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kyleinincubated/autonlp-abbb-622117836 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("kyleinincubated/autonlp-abbb-622117836", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("kyleinincubated/autonlp-abbb-622117836", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
BigSalmon/GPTHeHe
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-03-09T12:00:46Z
--- language: - tr tags: - roberta license: cc-by-nc-sa-4.0 datasets: - oscar --- # RoBERTa Turkish medium BPE 28k (uncased) Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned. Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is BPE. Vocabulary size is 28.6k. The details and performance comparisons can be found at this paper: https://arxiv.org/abs/2204.08832 The following code can be used for model loading and tokenization, example max length (514) can be changed: ``` model = AutoModel.from_pretrained([model_path]) #for sequence classification: #model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes]) tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path]) tokenizer.mask_token = "[MASK]" tokenizer.cls_token = "[CLS]" tokenizer.sep_token = "[SEP]" tokenizer.pad_token = "[PAD]" tokenizer.unk_token = "[UNK]" tokenizer.bos_token = "[CLS]" tokenizer.eos_token = "[SEP]" tokenizer.model_max_length = 514 ``` ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.08832, doi = {10.48550/ARXIV.2204.08832}, url = {https://arxiv.org/abs/2204.08832}, author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Impact of Tokenization on Language Models: An Analysis for Turkish}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
BigSalmon/GPTNeo350MInformalToFormalLincoln
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-03-09T12:04:35Z
--- language: - tr tags: - roberta license: cc-by-nc-sa-4.0 datasets: - oscar --- # RoBERTa Turkish medium BPE 44k (uncased) Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned. Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is BPE. Vocabulary size is 44.5k. The details and performance comparisons can be found at this paper: https://arxiv.org/abs/2204.08832 The following code can be used for model loading and tokenization, example max length (514) can be changed: ``` model = AutoModel.from_pretrained([model_path]) #for sequence classification: #model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes]) tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path]) tokenizer.mask_token = "[MASK]" tokenizer.cls_token = "[CLS]" tokenizer.sep_token = "[SEP]" tokenizer.pad_token = "[PAD]" tokenizer.unk_token = "[UNK]" tokenizer.bos_token = "[CLS]" tokenizer.eos_token = "[SEP]" tokenizer.model_max_length = 514 ``` ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.08832, doi = {10.48550/ARXIV.2204.08832}, url = {https://arxiv.org/abs/2204.08832}, author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Impact of Tokenization on Language Models: An Analysis for Turkish}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
BigSalmon/GPTNeo350MInformalToFormalLincoln3
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2022-03-09T12:18:19Z
--- language: - tr tags: - roberta license: cc-by-nc-sa-4.0 datasets: - oscar --- # RoBERTa Turkish medium Morph-level 7k (uncased) Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned. Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Morph-level, which means that text is split according to a Turkish morphological analyzer (Zemberek). Vocabulary size is 7.5k. ## Note that this model needs a preprocessing step before running, because the tokenizer file is not a morphological analyzer. That is, the test dataset cannot be split into morphemes with the tokenizer file. The user needs to process any test dataset with a Turkish morphological analyzer (Zemberek in this case) before running evaluation. The details and performance comparisons can be found at this paper: https://arxiv.org/abs/2204.08832 The following code can be used for model loading and tokenization, example max length (514) can be changed: ``` model = AutoModel.from_pretrained([model_path]) #for sequence classification: #model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes]) tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path]) tokenizer.mask_token = "[MASK]" tokenizer.cls_token = "[CLS]" tokenizer.sep_token = "[SEP]" tokenizer.pad_token = "[PAD]" tokenizer.unk_token = "[UNK]" tokenizer.bos_token = "[CLS]" tokenizer.eos_token = "[SEP]" tokenizer.model_max_length = 514 ``` ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.08832, doi = {10.48550/ARXIV.2204.08832}, url = {https://arxiv.org/abs/2204.08832}, author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Impact of Tokenization on Language Models: An Analysis for Turkish}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
BigSalmon/InformalToFormalLincoln16
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-03-09T12:47:06Z
--- language: - tr tags: - roberta license: cc-by-nc-sa-4.0 datasets: - oscar --- # RoBERTa Turkish medium Morph-level 66k (uncased) Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned. Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Morph-level, which means that text is split according to a Turkish morphological analyzer (Zemberek). Vocabulary size is 64.2k. ## Note that this model needs a preprocessing step before running, because the tokenizer file is not a morphological analyzer. That is, the test dataset cannot be split into morphemes with the tokenizer file. The user needs to process any test dataset with a Turkish morphological analyzer (Zemberek in this case) before running evaluation. The details and performance comparisons can be found at this paper: https://arxiv.org/abs/2204.08832 The following code can be used for model loading and tokenization, example max length (514) can be changed: ``` model = AutoModel.from_pretrained([model_path]) #for sequence classification: #model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes]) tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path]) tokenizer.mask_token = "[MASK]" tokenizer.cls_token = "[CLS]" tokenizer.sep_token = "[SEP]" tokenizer.pad_token = "[PAD]" tokenizer.unk_token = "[UNK]" tokenizer.bos_token = "[CLS]" tokenizer.eos_token = "[SEP]" tokenizer.model_max_length = 514 ``` ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.08832, doi = {10.48550/ARXIV.2204.08832}, url = {https://arxiv.org/abs/2204.08832}, author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Impact of Tokenization on Language Models: An Analysis for Turkish}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
BigSalmon/InformalToFormalLincoln19
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2022-03-09T13:17:25Z
--- language: - tr tags: - roberta license: cc-by-nc-sa-4.0 datasets: - oscar --- # RoBERTa Turkish medium Word-level 7k (uncased) Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned. Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Word-level, which means text is split by white space. Vocabulary size is 7.5k. The details and performance comparisons can be found at this paper: https://arxiv.org/abs/2204.08832 The following code can be used for model loading and tokenization, example max length (514) can be changed: ``` model = AutoModel.from_pretrained([model_path]) #for sequence classification: #model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes]) tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path]) tokenizer.mask_token = "[MASK]" tokenizer.cls_token = "[CLS]" tokenizer.sep_token = "[SEP]" tokenizer.pad_token = "[PAD]" tokenizer.unk_token = "[UNK]" tokenizer.bos_token = "[CLS]" tokenizer.eos_token = "[SEP]" tokenizer.model_max_length = 514 ``` ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.08832, doi = {10.48550/ARXIV.2204.08832}, url = {https://arxiv.org/abs/2204.08832}, author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Impact of Tokenization on Language Models: An Analysis for Turkish}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
BigSalmon/InformalToFormalLincoln20
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-03-09T13:26:34Z
--- language: - tr tags: - roberta license: cc-by-nc-sa-4.0 datasets: - oscar --- # RoBERTa Turkish medium Word-level 28k (uncased) Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned. Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Word-level, which means text is split by white space. Vocabulary size is 28.6k. The details and performance comparisons can be found at this paper: https://arxiv.org/abs/2204.08832 The following code can be used for model loading and tokenization, example max length (514) can be changed: ``` model = AutoModel.from_pretrained([model_path]) #for sequence classification: #model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes]) tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path]) tokenizer.mask_token = "[MASK]" tokenizer.cls_token = "[CLS]" tokenizer.sep_token = "[SEP]" tokenizer.pad_token = "[PAD]" tokenizer.unk_token = "[UNK]" tokenizer.bos_token = "[CLS]" tokenizer.eos_token = "[SEP]" tokenizer.model_max_length = 514 ``` ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.08832, doi = {10.48550/ARXIV.2204.08832}, url = {https://arxiv.org/abs/2204.08832}, author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Impact of Tokenization on Language Models: An Analysis for Turkish}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
BigSalmon/InformalToFormalLincoln22
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - generated_from_keras_callback model-index: - name: beto_stars results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # beto_stars This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.8954 - Train Accuracy: 0.6248 - Validation Loss: 1.1278 - Validation Accuracy: 0.5148 - Epoch: 14 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-07, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 1.5935 | 0.2626 | 1.5553 | 0.3482 | 0 | | 1.5072 | 0.3782 | 1.4289 | 0.4188 | 1 | | 1.3590 | 0.4312 | 1.2929 | 0.4406 | 2 | | 1.2463 | 0.4628 | 1.2197 | 0.4682 | 3 | | 1.1754 | 0.4970 | 1.1785 | 0.4830 | 4 | | 1.1299 | 0.5114 | 1.1533 | 0.4908 | 5 | | 1.0847 | 0.5362 | 1.1398 | 0.5006 | 6 | | 1.0492 | 0.5440 | 1.1273 | 0.5046 | 7 | | 1.0278 | 0.5592 | 1.1237 | 0.5034 | 8 | | 1.0031 | 0.5690 | 1.1171 | 0.5118 | 9 | | 0.9798 | 0.5712 | 1.1163 | 0.5120 | 10 | | 0.9598 | 0.5894 | 1.1180 | 0.5114 | 11 | | 0.9406 | 0.5964 | 1.1219 | 0.5122 | 12 | | 0.9178 | 0.6104 | 1.1269 | 0.5150 | 13 | | 0.8954 | 0.6248 | 1.1278 | 0.5148 | 14 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Datasets 1.18.4 - Tokenizers 0.11.6
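A minimal TensorFlow usage sketch — the checkpoint path is a placeholder (the card does not list a hub id), and the model is assumed to be a star-rating classifier based on its name and metrics:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Placeholder -- substitute the actual location of the beto_stars checkpoint.
model_path = "path/to/beto_stars"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = TFAutoModelForSequenceClassification.from_pretrained(model_path)

inputs = tokenizer("La comida estuvo excelente y el servicio fue muy amable.", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # predicted class index (assumed to map to a star rating)
```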
BigSalmon/InformalToFormalLincoln24
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-03-09T13:48:40Z
--- language: - tr tags: - roberta license: cc-by-nc-sa-4.0 datasets: - oscar --- # RoBERTa Turkish medium Word-level 66k (uncased) Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased. The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned. Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Word-level, which means text is split by white space. Vocabulary size is 66.7k. The details and performance comparisons can be found at this paper: https://arxiv.org/abs/2204.08832 The following code can be used for model loading and tokenization, example max length (514) can be changed: ``` model = AutoModel.from_pretrained([model_path]) #for sequence classification: #model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes]) tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path]) tokenizer.mask_token = "[MASK]" tokenizer.cls_token = "[CLS]" tokenizer.sep_token = "[SEP]" tokenizer.pad_token = "[PAD]" tokenizer.unk_token = "[UNK]" tokenizer.bos_token = "[CLS]" tokenizer.eos_token = "[SEP]" tokenizer.model_max_length = 514 ``` ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.08832, doi = {10.48550/ARXIV.2204.08832}, url = {https://arxiv.org/abs/2204.08832}, author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Impact of Tokenization on Language Models: An Analysis for Turkish}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
BigSalmon/MrLincoln
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-03-09T15:28:56Z
sberbank-ai/ruRoberta-large fine-tuned for the Russian Artificial Text Detection shared task.
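A minimal usage sketch, assuming the fine-tune exposes a standard sequence-classification head that separates human-written from machine-generated text; the checkpoint path is a placeholder since no hub id is given:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder -- substitute the actual location of this fine-tuned checkpoint.
model_path = "path/to/ruroberta-large-atd"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

text = "Пример текста, который нужно проверить на искусственное происхождение."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])
```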
BigSalmon/MrLincoln125MNeo
[ "pytorch", "tensorboard", "gpt_neo", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- language: en license: mit library_name: PyTorch tags: - computer vision - GAN datasets: - multi-pie --- Face Frontalization is a generative computer vision task in which the model takes a photo of a person's head taken at an angle between -90 and 90 degrees, and produces an image of what that person's frontal (i.e. 0 degree) view of the face might look like. The present model was first released in [this repository](https://github.com/scaleway/frontalization) by [Scaleway](https://www.scaleway.com/), a European cloud provider originating from France. It has been previously discussed in a [Scaleway blog post](https://blog.scaleway.com/gpu-instances-using-deep-learning-to-obtain-frontal-rendering-of-facial-images/) and presented at [the DataXDay conference in Paris](https://www.youtube.com/watch?v=aL7rhJz8mAI). The model's GAN architecture was inspired by [the work of R. Huang et al](https://arxiv.org/abs/1704.04086). # Model description The Face Frontalization model is the Generator part of a [GAN](https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf) that was trained in a supervised fashion on profile-frontal image pairs. The Discriminator was based on a fairly standard [DCGAN](https://arxiv.org/abs/1511.06434) architecture, where the input is a 128x128x3 image that is processed through multiple convolutional layers, to be classified as either Real or Fake. The Generator had to be modified in order to fit the supervised learning scenario. It consists of convolutional layers (the Encoder of the input image), followed by a 512-dimensional hidden representation that is then fed into the Decoder made up of deconvolutional layers, which produces the output image. For more details on the model's architecture, see [this blog post](https://blog.scaleway.com/gpu-instances-using-deep-learning-to-obtain-frontal-rendering-of-facial-images/). # Intended uses & limitations The present Face Frontalization model was not intended to represent the state of the art for this machine learning task. Instead, the goals were: (a) to demonstrate the benefits of using a GAN for supervised machine learning tasks (whereas the original GAN is an unsupervised generative algorithm; see [this conference talk](https://www.youtube.com/watch?v=aL7rhJz8mAI) for more details); (b) to show how a complex generative computer vision project can be accomplished on a [Scaleway cloud RENDER-S instance](https://www.scaleway.com/en/gpu-instances/) within ~ a day. # How to use The Face Frontalization model is a saved Pytorch model that can be loaded provided the included *network* package is present in the directory. It takes in 3-channel color images resized to 128x128 pixels in the form of [N, 3, 128, 128] tensors (where N is the size of the batch). Ideally, the input images should be closely-cropped photos of faces, taken in good lighting conditions. Here is how the model can be used for inference with a *gradio* image widget, e.g. 
in a Jupyter notebook: ``` import gradio as gr import numpy as np import torch from torchvision import transforms from torch.autograd import Variable from PIL import Image import matplotlib.pyplot as plt import warnings warnings.filterwarnings('ignore') # Load the saved Frontalization generator model saved_model = torch.load("./generator_v0.pt", map_location=torch.device('cpu')) def frontalize(image): # Convert the test image to a [1, 3, 128, 128]-shaped torch tensor # (as required by the frontalization model) preprocess = transforms.Compose((transforms.ToPILImage(), transforms.Resize(size = (128, 128)), transforms.ToTensor())) input_tensor = torch.unsqueeze(preprocess(image), 0) # Use the saved model to generate an output (whose values go between -1 and 1, # and this will need to get fixed before the output is displayed) generated_image = saved_model(Variable(input_tensor.type('torch.FloatTensor'))) generated_image = generated_image.detach().squeeze().permute(1, 2, 0).numpy() generated_image = (generated_image + 1.0) / 2.0 return generated_image iface = gr.Interface(frontalize, gr.inputs.Image(type="numpy"), "image") iface.launch() ``` # Limitations and bias As mentioned in the **Intended uses** section, the present model's performance is not intended to compete with the state of the art. Additionally, as the training data had a disproportionately high number of images of caucasian and asian males in their 20s, the model does not perform as well when supplied with images of people not belonging to this limited demographic. # Training data The present model was trained on [the CMU Multi-PIE Face Database that is available commercially](https://www.cs.cmu.edu/afs/cs/project/PIE/MultiPie/Multi-Pie/Home.html). The input images were closely cropped to include the face of a person photographed at an angle between -90 and 90 degrees. The target frontal images were cropped and aligned so that the center of the person's left eye was at the same relative position in all of them. Having a precise alignment for the target images turned out to play a key role in the training of the model. # Training procedure The training of the model was performed in a similar manner to that of a regular unsupervised [GAN](https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf), except that in addition to the binary cross entropy loss for the Discriminator, a pixelwise loss function was introduced for the Generator (see [the blog post](https://blog.scaleway.com/gpu-instances-using-deep-learning-to-obtain-frontal-rendering-of-facial-images/) for details). The exact weights given to the L1 and L2 pixelwise losses, as well as the BCE (GAN) loss were as follows: ``` L1_factor = 1 L2_factor = 1 GAN_factor = 0.001 ``` The model was trained for 18 epochs, with the training batch size equal to 30. The following optimizers were used for the Discriminator and the Generator: ``` optimizerD = optim.Adam(netD.parameters(), lr = 0.0002, betas = (0.5, 0.999)) optimizerG = optim.Adam(netG.parameters(), lr = 0.0002, betas = (0.5, 0.999), eps = 1e-8) ``` # Evaluation results GANs are notoriously difficult to train, with the losses for the Discriminator and the Generator often failing to converge even when producing what looks to be a highly realistic result to a human eye. 
The pixelwise loss for the test images also serves as a poor indicator of the model's performance because any variation in the lighting between the real target photo and the generated image could result in a deceptively high discrepancy between the two. The best evaluation method that remains is the manual inspection of the generated results. We have found that the present model performs reasonably well on the test data from the CMU Multi-PIE Face Database (naturally, all of the photos of the individuals included in the test set were removed from training): ![test examples](https://github.com/scaleway/frontalization/raw/master/pretrained/test-Pie.jpg) (Top row: inputs; middle row: model outputs; bottom row: ground truth images)
BigSalmon/MrLincoln3
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17
null
--- license: apache-2.0 --- ## UK & Ireland Accent Classification Model This model classifies UK & Ireland accents using feature extraction from [Yamnet](https://tfhub.dev/google/yamnet/1). ### Yamnet Model Yamnet is an audio event classifier trained on the AudioSet dataset to predict audio events from the AudioSet ontology. It is available on TensorFlow Hub. Yamnet accepts a 1-D tensor of audio samples with a sample rate of 16 kHz. As output, the model returns a 3-tuple: - Scores of shape `(N, 521)` representing the scores of the 521 classes. - Embeddings of shape `(N, 1024)`. - The log-mel spectrogram of the entire audio frame. We will use the embeddings, which are the features extracted from the audio samples, as the input to our dense model. For more detailed information about Yamnet, please refer to its [TensorFlow Hub](https://tfhub.dev/google/yamnet/1) page. ### Dense Model The dense model that we used consists of: - An input layer which is the embedding output of the Yamnet classifier. - 4 dense hidden layers and 4 dropout layers. - An output dense layer. <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details> --- ## Results The model achieved the following results: Results | Training | Validation -----------|-----------|------------ Accuracy | 55% | 51% AUC | 0.9090 | 0.8911 d-prime | 1.887 | 1.743 And the confusion matrix for the validation set is: ![Validation Confusion Matrix](./confusion_matrix.png) --- ## Dataset The dataset used is the [Crowdsourced high-quality UK and Ireland English Dialect speech data set](https://openslr.org/83/) which consists of a total of 17,877 high-quality audio wav files. This dataset includes over 31 hours of recording from 120 volunteers who self-identify as native speakers of Southern England, Midlands, Northern England, Wales, Scotland and Ireland. For more info, please refer to the above link or to the following paper: [Open-source Multi-speaker Corpora of the English Accents in the British Isles](https://aclanthology.org/2020.lrec-1.804.pdf) --- ## How to use Having already installed `huggingface_hub` using: `pip install -U -q huggingface_hub` Use the following in your code: `from huggingface_hub import from_pretrained_keras` `model = from_pretrained_keras("fbadine/uk_ireland_accent_classification")` --- ## Demo A demo is available in [HuggingFace Spaces](https://huggingface.co/spaces/fbadine/uk_ireland_accent_classification)
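For illustration, a Keras sketch of a dense head of this shape — the hidden-layer sizes and dropout rate below are assumptions, since the card only specifies a 1024-dimensional Yamnet embedding input, four dense hidden layers with four dropout layers, and a six-class output:
```python
from tensorflow import keras

inputs = keras.Input(shape=(1024,), name="yamnet_embedding")
x = inputs
# Hidden sizes and dropout rate are illustrative assumptions, not the exact values used.
for units in (512, 256, 128, 64):
    x = keras.layers.Dense(units, activation="relu")(x)
    x = keras.layers.Dropout(0.3)(x)
outputs = keras.layers.Dense(6, activation="softmax", name="accent")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```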
BigSalmon/ParaphraseParentheses
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Van Van model trained on imagenet-1k. It was introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released in [this repository](https://github.com/Visual-Attention-Network/VAN-Classification). Disclaimer: The team releasing Van did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/van_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=van) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, VanForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("Visual-Attention-Network/van-base") >>> model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-base") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) tabby, tabby cat ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/van).
BigSalmon/Points
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
An AI model that, given a statement, generates a question that would have likely resulted in said statement. Created for a Senior Project at Calvin University.
BlueGamerBeast/DialoGPT-small-joshua
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer model-index: - name: bart-large-cnn-100k-lit-evalMA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-100k-lit-evalMA This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.7715 - eval_rouge1: 29.7037 - eval_rouge2: 15.0234 - eval_rougeL: 23.5169 - eval_rougeLsum: 26.8682 - eval_gen_len: 68.1209 - eval_runtime: 28898.0987 - eval_samples_per_second: 0.346 - eval_steps_per_second: 0.346 - epoch: 1.0 - step: 100000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
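A minimal usage sketch — the checkpoint path is a placeholder (the card does not list a hub id) and the generation lengths are illustrative:
```python
from transformers import pipeline

# Placeholder -- substitute the actual location of this fine-tuned checkpoint.
model_path = "path/to/bart-large-cnn-100k-lit-evalMA"

summarizer = pipeline("summarization", model=model_path)

text = (
    "The old house stood at the end of the lane, its windows dark and its garden overgrown. "
    "For years the villagers had told stories about it, and none of the children would walk "
    "past it after sunset, though nobody could say where the stories had begun."
)
print(summarizer(text, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```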
Bosio/full-sentence-distillroberta3-finetuned-wikitext2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-wiki results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-wiki This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7509 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9294 | 1.0 | 2319 | 1.7732 | | 1.8219 | 2.0 | 4638 | 1.7363 | | 1.7957 | 3.0 | 6957 | 1.7454 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0 - Datasets 1.18.3 - Tokenizers 0.11.0
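This looks like a masked-language-model fine-tune (only a language-modeling loss is reported), so a minimal fill-mask sketch — the checkpoint path is a placeholder:
```python
from transformers import pipeline

# Placeholder -- substitute the actual location of this fine-tuned checkpoint.
model_path = "path/to/bert-base-uncased-wiki"

fill_mask = pipeline("fill-mask", model=model_path)
for prediction in fill_mask("The article was first published in [MASK]."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```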
BossLee/t5-gec
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
6
null
--- tags: autonlp language: zh widget: - text: "I love AutoNLP 🤗" datasets: - kyleinincubated/autonlp-data-cat33 co2_eq_emissions: 1.2490471218570545 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 624317932 - CO2 Emissions (in grams): 1.2490471218570545 ## Validation Metrics - Loss: 0.5579860806465149 - Accuracy: 0.8717391304347826 - Macro F1: 0.6625543939916455 - Micro F1: 0.8717391304347827 - Weighted F1: 0.8593303742671491 - Macro Precision: 0.7214757380849891 - Micro Precision: 0.8717391304347826 - Weighted Precision: 0.8629042654788023 - Macro Recall: 0.6540187758140144 - Micro Recall: 0.8717391304347826 - Weighted Recall: 0.8717391304347826 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kyleinincubated/autonlp-cat33-624317932 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("kyleinincubated/autonlp-cat33-624317932", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("kyleinincubated/autonlp-cat33-624317932", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
Broadus20/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-sports-scouting results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-sports-scouting This model is a fine-tuned version of [amanm27/bert-base-uncased-sports](https://huggingface.co/amanm27/bert-base-uncased-sports) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5127 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 378 | 1.7194 | | 2.0165 | 2.0 | 756 | 1.5709 | | 1.6935 | 3.0 | 1134 | 1.5282 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0 - Datasets 1.18.3 - Tokenizers 0.11.0
Broadus20/DialoGPT-small-joshua
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-wiki-sports-scouting results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-wiki-sports-scouting This model is a fine-tuned version of [amanm27/bert-base-uncased-wiki-sports](https://huggingface.co/amanm27/bert-base-uncased-wiki-sports) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4909 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 378 | 1.6816 | | 1.9594 | 2.0 | 756 | 1.5421 | | 1.66 | 3.0 | 1134 | 1.5022 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0 - Datasets 1.18.3 - Tokenizers 0.11.0
BumBelDumBel/TRUMP
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - translation - Fairseq widget: - text: "<2li> Let us generate some Livonian text!" --- [Fairseq](https://github.com/pytorch/fairseq) model for translating between English, Estonian, Latvian and Livonian. Subword units were created with [SentencePiece](https://github.com/google/sentencepiece). To specify the target language to translate into, prepend one of the following language code tags to the source sentences: ``` <2en> Šis teikums jātulko angļu valodā <2et> This sentence should be translated into Estonian <2lv> This sentence should be translated into Latvian <2li> This sentence should be translated into Livonian ``` The language tag should be prepended after the input has been segmented with SentencePiece; a minimal preprocessing sketch is shown below.
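For illustration, here is a minimal preprocessing sketch in Python. It is an assumption-laden sketch rather than part of the original release: the file name `sentencepiece.model` and the `preprocess` helper are hypothetical, and it assumes the SentencePiece model shipped with the checkpoint is available locally.

```python
import sentencepiece as spm

# Load the SentencePiece model distributed with the checkpoint
# (the file name here is an assumption).
sp = spm.SentencePieceProcessor(model_file="sentencepiece.model")

def preprocess(sentence: str, target_tag: str) -> str:
    # Segment the sentence into subword pieces first,
    # then prepend the target-language tag.
    pieces = sp.encode(sentence, out_type=str)
    return target_tag + " " + " ".join(pieces)

print(preprocess("Šis teikums jātulko angļu valodā", "<2en>"))
```

The tagged, segmented line can then be passed to the Fairseq translation interface in the usual way.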
BumBelDumBel/ZORK_AI_SCIFI
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: name: wav2vec2-base-finetuned-ks --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-ks This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0029 - Accuracy: 0.9997 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0037 | 1.0 | 400 | 0.0054 | 0.9991 | | 0.0007 | 2.0 | 800 | 0.0029 | 0.9997 | | 0.0004 | 3.0 | 1200 | 0.0028 | 0.9997 | | 0.0003 | 4.0 | 1600 | 0.0029 | 0.9997 | | 0.0003 | 5.0 | 2000 | 0.0028 | 0.9997 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.10.3
BunakovD/sd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8594910162670748 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1348 - F1: 0.8595 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2556 | 1.0 | 525 | 0.1629 | 0.8218 | | 0.1309 | 2.0 | 1050 | 0.1378 | 0.8522 | | 0.0812 | 3.0 | 1575 | 0.1348 | 0.8595 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
Buntan/xlm-roberta-base-finetuned-marc-en
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
My model can recognize price tags and compare them with competitors' prices.
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16,451
null
--- language: English task: extractive question answering datasets: SQuAD 2.0 tags: - bert-base --- # Model Description This model is for English extractive question answering. It is based on the [bert-base-cased](https://huggingface.co/bert-base-cased) model, and it is case-sensitive: it makes a difference between english and English. # Training data [English SQuAD v2.0](https://rajpurkar.github.io/SQuAD-explorer/) # How to use You can use it directly from the [🤗 Transformers](https://github.com/huggingface/transformers) library with a pipeline: ``` python >>> from transformers.pipelines import pipeline >>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering >>> tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-en-bert-base") >>> model = AutoModelForQuestionAnswering.from_pretrained("zhufy/squad-en-bert-base") >>> nlp = pipeline("question-answering", model=model, tokenizer=tokenizer) >>> context = "A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do." >>> question = "What are two basic primary resources used to guage complexity?" >>> inputs = {"question": question, "context":context } >>> nlp(inputs) {'score': 0.8589141368865967, 'start': 305, 'end': 321, 'answer': 'time and storage'} ```
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
18
null
--- tags: - conversational --- # My Awesome Model
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
71
2022-03-10T10:26:05Z
--- license: gpl-3.0 --- Latvian BERT-base-cased model. ``` @inproceedings{Znotins-Barzdins:2020:BalticHLT, author = "A. Znotins and G. Barzdins", title = "LVBERT: Transformer-Based Model for Latvian Language Understanding", year = 2020, booktitle = "Human Language Technologies - The Baltic Perspective", publisher = "IOS Press", volume = 328, pages = "111-115", doi = "10.3233/FAIA200610", url = "http://ebooks.iospress.nl/volumearticle/55531" } ``` To cite the repository item: Znotiņš, Artūrs, 2020, LVBERT - Latvian BERT, CLARIN-LV digital library at IMCS, University of Latvia, http://hdl.handle.net/20.500.12574/43
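A minimal loading sketch with 🤗 Transformers follows; the repository identifier below is an assumption and should be replaced with the id under which LVBERT is actually published.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Hypothetical repository id -- replace with the actual LVBERT identifier.
model_name = "AiLab-IMCS-UL/lvbert"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
```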
CAMeL-Lab/bert-base-arabic-camelbert-da-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42
null
--- language: - ab tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 207.6055 - Wer: 1.5475 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.5.dev0 - Tokenizers 0.11.6
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- tags: - generated_from_trainer model-index: - name: tmp_trainer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tmp_trainer This model is a fine-tuned version of [pong/opus-mt-en-mul-finetuned-en-to-th](https://huggingface.co/pong/opus-mt-en-mul-finetuned-en-to-th) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- language: - "de" tags: - "german" - "token-classification" - "pos" - "dependency-parsing" datasets: - "universal_dependencies" license: "mit" pipeline_tag: "token-classification" --- # bert-base-german-upos ## Model Description This is a BERT model pre-trained with [UD_German-HDT](https://github.com/UniversalDependencies/UD_German-HDT) for POS-tagging and dependency-parsing, derived from [gbert-base](https://huggingface.co/deepset/gbert-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-german-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-german-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-base-german-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
CAMeL-Lab/bert-base-arabic-camelbert-da
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
449
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - pub_med_summarization_dataset metrics: - rouge model-index: - name: bigbird-pegasus-large-bigpatent-finetuned-pubMed results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: pub_med_summarization_dataset type: pub_med_summarization_dataset args: document metrics: - name: Rouge1 type: rouge value: 45.0851 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bigbird-pegasus-large-bigpatent-finetuned-pubMed This model is a fine-tuned version of [google/bigbird-pegasus-large-bigpatent](https://huggingface.co/google/bigbird-pegasus-large-bigpatent) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.5403 - Rouge1: 45.0851 - Rouge2: 19.5488 - Rougel: 27.391 - Rougelsum: 41.112 - Gen Len: 231.608 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.1198 | 1.0 | 500 | 1.6285 | 43.0579 | 18.1792 | 26.421 | 39.0769 | 214.924 | | 1.6939 | 2.0 | 1000 | 1.5696 | 44.0679 | 18.9331 | 26.84 | 40.0684 | 222.814 | | 1.6195 | 3.0 | 1500 | 1.5506 | 44.7352 | 19.3532 | 27.2418 | 40.7454 | 229.396 | | 1.5798 | 4.0 | 2000 | 1.5403 | 45.0415 | 19.5019 | 27.2969 | 40.951 | 231.044 | | 1.5592 | 5.0 | 2500 | 1.5403 | 45.0851 | 19.5488 | 27.391 | 41.112 | 231.608 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4701 - Wer: 0.4537 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5672 | 4.0 | 500 | 1.6669 | 1.0323 | | 0.6226 | 8.0 | 1000 | 0.4701 | 0.4537 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
133
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-islamic-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-islamic-squad This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.3855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 40 | 0.4082 | | No log | 2.0 | 80 | 0.3855 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
CAMeL-Lab/bert-base-arabic-camelbert-msa
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,967
null
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - Someshfengde/autonlp-data-kaggledays co2_eq_emissions: 28.622267513847273 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 625717992 - CO2 Emissions (in grams): 28.622267513847273 ## Validation Metrics - Loss: 0.8782362937927246 - Accuracy: 0.6022282660559214 - Macro F1: 0.6024258279848015 - Micro F1: 0.6022282660559214 - Weighted F1: 0.6024299908624371 - Macro Precision: 0.604093172183357 - Micro Precision: 0.6022282660559214 - Weighted Precision: 0.6041166306778806 - Macro Recall: 0.6022424576798522 - Micro Recall: 0.6022282660559214 - Weighted Recall: 0.6022282660559214 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Someshfengde/autonlp-kaggledays-625717992 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Someshfengde/autonlp-kaggledays-625717992", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Someshfengde/autonlp-kaggledays-625717992", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
CLAck/en-vi
[ "pytorch", "marian", "text2text-generation", "en", "vi", "dataset:ALT", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit tags: - generated_from_trainer model-index: - name: predict-perception-bert-blame-victim results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # predict-perception-bert-blame-victim This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5075 - Rmse: 0.4599 - Rmse Blame::a La vittima: 0.4599 - Mae: 0.3607 - Mae Blame::a La vittima: 0.3607 - R2: -0.1848 - R2 Blame::a La vittima: -0.1848 - Cos: 0.2174 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.2924 - Rsa: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a La vittima | Mae | Mae Blame::a La vittima | R2 | R2 Blame::a La vittima | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------:|:------:|:-----------------------:|:-------:|:----------------------:|:-------:|:----:|:----:|:---------:|:---:| | 1.0264 | 1.0 | 15 | 0.4334 | 0.4250 | 0.4250 | 0.3666 | 0.3666 | -0.0119 | -0.0119 | 0.1304 | 0.0 | 0.5 | 0.2703 | nan | | 0.9814 | 2.0 | 30 | 0.4505 | 0.4333 | 0.4333 | 0.3744 | 0.3744 | -0.0517 | -0.0517 | 0.2174 | 0.0 | 0.5 | 0.2751 | nan | | 0.9283 | 3.0 | 45 | 0.4349 | 0.4257 | 0.4257 | 0.3627 | 0.3627 | -0.0152 | -0.0152 | 0.1304 | 0.0 | 0.5 | 0.2779 | nan | | 0.8904 | 4.0 | 60 | 0.4662 | 0.4408 | 0.4408 | 0.3773 | 0.3773 | -0.0884 | -0.0884 | -0.0435 | 0.0 | 0.5 | 0.2681 | nan | | 0.836 | 5.0 | 75 | 0.4188 | 0.4177 | 0.4177 | 0.3609 | 0.3609 | 0.0223 | 0.0223 | 0.2174 | 0.0 | 0.5 | 0.3051 | nan | | 0.8293 | 6.0 | 90 | 0.4142 | 0.4155 | 0.4155 | 0.3512 | 0.3512 | 0.0330 | 0.0330 | 0.2174 | 0.0 | 0.5 | 0.3220 | nan | | 0.7629 | 7.0 | 105 | 0.3837 | 0.3999 | 0.3999 | 0.3387 | 0.3387 | 0.1041 | 0.1041 | 0.2174 | 0.0 | 0.5 | 0.3051 | nan | | 0.7266 | 8.0 | 120 | 0.3664 | 0.3907 | 0.3907 | 0.3250 | 0.3250 | 0.1446 | 0.1446 | 0.3043 | 0.0 | 0.5 | 0.3409 | nan | | 0.6121 | 9.0 | 135 | 0.3718 | 0.3936 | 0.3936 | 0.3312 | 0.3312 | 0.1320 | 0.1320 | 0.3043 | 0.0 | 0.5 | 0.3983 | nan | | 0.5694 | 10.0 | 150 | 0.3679 | 0.3915 | 0.3915 | 0.3197 | 0.3197 | 0.1411 | 0.1411 | 0.3913 | 0.0 | 0.5 | 0.3518 | nan | | 0.4647 | 11.0 | 165 | 0.3868 | 0.4015 | 0.4015 | 0.3340 | 0.3340 | 0.0970 | 0.0970 | 0.2174 | 0.0 | 0.5 | 0.3285 | nan | | 0.4212 | 12.0 | 180 | 0.3717 | 0.3936 | 0.3936 | 0.3188 | 0.3188 | 0.1322 | 0.1322 | 0.3913 | 0.0 | 0.5 | 0.3518 | nan | | 0.3605 | 13.0 | 195 | 0.3437 | 0.3784 | 0.3784 | 0.3066 | 0.3066 | 0.1976 | 0.1976 | 0.3043 | 0.0 | 0.5 | 0.3423 | nan | | 0.2759 | 14.0 | 210 | 0.3892 | 0.4027 | 0.4027 | 0.3230 | 0.3230 | 0.0914 | 0.0914 | 0.3913 | 0.0 | 0.5 | 0.3518 | nan | | 0.2868 | 15.0 | 225 | 0.3720 | 0.3937 | 0.3937 | 0.3218 | 0.3218 | 0.1315 | 0.1315 | 0.3913 | 0.0 | 0.5 | 0.3440 | nan | | 0.2467 | 16.0 | 240 | 0.3881 | 0.4022 | 0.4022 | 0.3291 | 0.3291 | 0.0939 | 0.0939 | 0.3043 | 0.0 | 0.5 | 0.3363 | nan | | 0.2013 | 17.0 | 255 | 0.4121 | 0.4144 | 0.4144 | 0.3373 | 0.3373 | 0.0380 | 0.0380 | 0.3043 | 0.0 | 0.5 | 0.3363 | nan | | 0.1966 | 18.0 | 270 | 0.4808 | 0.4476 | 0.4476 | 0.3506 | 0.3506 | -0.1224 | -0.1224 | 0.3913 | 0.0 | 0.5 | 0.3214 | nan | | 0.177 | 19.0 | 285 | 0.4263 | 0.4215 | 0.4215 | 0.3398 | 0.3398 | 0.0046 | 0.0046 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan | | 0.1589 | 20.0 | 300 | 0.4274 | 0.4220 | 0.4220 | 0.3363 | 0.3363 | 0.0022 | 0.0022 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan | | 0.1488 | 21.0 | 315 | 0.4548 | 0.4353 | 0.4353 | 0.3431 | 0.3431 | -0.0618 | -0.0618 | 0.3043 | 0.0 | 0.5 | 0.2924 | nan | | 0.1428 | 22.0 | 330 | 0.4405 | 0.4285 | 0.4285 | 0.3417 | 0.3417 | -0.0285 | -0.0285 | 0.3043 | 0.0 | 0.5 | 0.3363 | nan | | 0.1294 | 23.0 | 345 | 0.4955 | 0.4544 | 0.4544 | 0.3565 | 0.3565 | -0.1568 | -0.1568 | 0.3913 | 0.0 | 0.5 | 0.3440 | nan | | 0.1291 | 24.0 | 360 | 0.4861 | 0.4501 | 0.4501 | 0.3529 | 0.3529 | -0.1348 | -0.1348 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan | | 0.1187 | 25.0 | 375 | 0.4752 | 0.4450 | 0.4450 | 0.3518 | 0.3518 | -0.1095 | -0.1095 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan | | 0.1141 | 26.0 | 390 | 0.5131 | 0.4624 | 0.4624 | 0.3598 | 0.3598 | -0.1978 | -0.1978 | 0.3043 | 0.0 | 0.5 | 0.2924 | nan | | 0.1094 | 27.0 | 405 | 0.4863 | 0.4502 | 0.4502 | 0.3547 | 0.3547 | -0.1353 | -0.1353 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan | | 0.0925 | 28.0 | 420 | 0.4900 | 0.4519 | 0.4519 | 0.3564 | 0.3564 | -0.1439 | -0.1439 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan | | 0.108 | 29.0 | 435 | 0.5019 | 0.4573 | 0.4573 | 0.3590 | 0.3590 | -0.1719 | -0.1719 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan | | 0.1054 | 30.0 | 450 | 0.5075 | 0.4599 | 0.4599 | 0.3607 | 0.3607 | -0.1848 | -0.1848 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0