| Column | Type | Range / values |
|:---|:---|:---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 – 18.3M |
| metadata | stringlengths | 2 – 1.07B |
| id | stringlengths | 5 – 122 |
| last_modified | null | - |
| tags | listlengths | 1 – 1.84k |
| sha | null | - |
| created_at | stringlengths | 25 – 25 |
null
null
{}
anirudh21/bert-base-uncased-finetuned-mnli
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-finetuned-mrpc

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6645
- Accuracy: 0.7917
- F1: 0.8590

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 63 | 0.5387 | 0.7402 | 0.8349 |
| No log | 2.0 | 126 | 0.5770 | 0.7696 | 0.8513 |
| No log | 3.0 | 189 | 0.5357 | 0.7574 | 0.8223 |
| No log | 4.0 | 252 | 0.6645 | 0.7917 | 0.8590 |
| No log | 5.0 | 315 | 0.6977 | 0.7721 | 0.8426 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
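The card gives no inference example; below is a minimal sentence-pair classification sketch for this checkpoint. Everything beyond the model id (the example sentences, the choice of `AutoModelForSequenceClassification`) is an assumption, not part of the original card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical usage sketch; not part of the original card.
model_id = "anirudh21/bert-base-uncased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair task: are the two sentences paraphrases?
inputs = tokenizer(
    "The company reported strong earnings.",
    "Profits at the firm were high.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
# Class probabilities; label order follows model.config.id2label
print(logits.softmax(dim=-1))
```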
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "bert-base-uncased-finetuned-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.7916666666666666, "name": "Accuracy"}, {"type": "f1", "value": 0.8590381426202321, "name": "F1"}]}]}]}
anirudh21/bert-base-uncased-finetuned-mrpc
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-finetuned-qnli

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6268
- Accuracy: 0.7917

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 63 | 0.5339 | 0.7620 |
| No log | 2.0 | 126 | 0.4728 | 0.7866 |
| No log | 3.0 | 189 | 0.5386 | 0.7847 |
| No log | 4.0 | 252 | 0.6096 | 0.7904 |
| No log | 5.0 | 315 | 0.6268 | 0.7917 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-qnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "qnli"}, "metrics": [{"type": "accuracy", "value": 0.791689547867472, "name": "Accuracy"}]}]}]}
anirudh21/bert-base-uncased-finetuned-qnli
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anirudh21/bert-base-uncased-finetuned-qqp
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-finetuned-rte

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8075
- Accuracy: 0.6643

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 63 | 0.6777 | 0.5668 |
| No log | 2.0 | 126 | 0.6723 | 0.6282 |
| No log | 3.0 | 189 | 0.7238 | 0.6318 |
| No log | 4.0 | 252 | 0.7993 | 0.6354 |
| No log | 5.0 | 315 | 0.8075 | 0.6643 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.6642599277978339, "name": "Accuracy"}]}]}]}
anirudh21/bert-base-uncased-finetuned-rte
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anirudh21/bert-base-uncased-finetuned-sst2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-finetuned-wnli

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6854
- Accuracy: 0.5634

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 40 | 0.6854 | 0.5634 |
| No log | 2.0 | 80 | 0.6983 | 0.3239 |
| No log | 3.0 | 120 | 0.6995 | 0.5352 |
| No log | 4.0 | 160 | 0.6986 | 0.5634 |
| No log | 5.0 | 200 | 0.6996 | 0.5634 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]}
anirudh21/bert-base-uncased-finetuned-wnli
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8623
- Matthews Correlation: 0.5224

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:---:|:---:|:---:|:---:|:---:|
| 0.5278 | 1.0 | 535 | 0.5223 | 0.4007 |
| 0.3515 | 2.0 | 1070 | 0.5150 | 0.4993 |
| 0.2391 | 3.0 | 1605 | 0.6471 | 0.5103 |
| 0.1841 | 4.0 | 2140 | 0.7640 | 0.5153 |
| 0.1312 | 5.0 | 2675 | 0.8623 | 0.5224 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5224154837835395, "name": "Matthews Correlation"}]}]}]}
anirudh21/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-mrpc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3830
- Accuracy: 0.8456
- F1: 0.8959

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 230 | 0.3826 | 0.8186 | 0.8683 |
| No log | 2.0 | 460 | 0.3830 | 0.8456 | 0.8959 |
| 0.4408 | 3.0 | 690 | 0.3835 | 0.8382 | 0.8866 |
| 0.4408 | 4.0 | 920 | 0.5036 | 0.8431 | 0.8919 |
| 0.1941 | 5.0 | 1150 | 0.5783 | 0.8431 | 0.8930 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.8455882352941176, "name": "Accuracy"}, {"type": "f1", "value": 0.8958677685950412, "name": "F1"}]}]}]}
anirudh21/distilbert-base-uncased-finetuned-mrpc
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-qnli

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8121
- Accuracy: 0.6065

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 156 | 0.6949 | 0.4874 |
| No log | 2.0 | 312 | 0.6596 | 0.5957 |
| No log | 3.0 | 468 | 0.7186 | 0.5812 |
| 0.6026 | 4.0 | 624 | 0.7727 | 0.6029 |
| 0.6026 | 5.0 | 780 | 0.8121 | 0.6065 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-qnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.6064981949458483, "name": "Accuracy"}]}]}]}
anirudh21/distilbert-base-uncased-finetuned-qnli
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-rte

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6661
- Accuracy: 0.6173

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 156 | 0.6921 | 0.5162 |
| No log | 2.0 | 312 | 0.6661 | 0.6173 |
| No log | 3.0 | 468 | 0.7794 | 0.5632 |
| 0.5903 | 4.0 | 624 | 0.8832 | 0.5921 |
| 0.5903 | 5.0 | 780 | 0.9376 | 0.5921 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.6173285198555957, "name": "Accuracy"}]}]}]}
anirudh21/distilbert-base-uncased-finetuned-rte
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-sst2

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4028
- Accuracy: 0.9083

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.188 | 1.0 | 4210 | 0.3127 | 0.9037 |
| 0.1299 | 2.0 | 8420 | 0.3887 | 0.9048 |
| 0.0845 | 3.0 | 12630 | 0.4028 | 0.9083 |
| 0.0691 | 4.0 | 16840 | 0.3924 | 0.9071 |
| 0.052 | 5.0 | 21050 | 0.5047 | 0.9002 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "sst2"}, "metrics": [{"type": "accuracy", "value": 0.908256880733945, "name": "Accuracy"}]}]}]}
anirudh21/distilbert-base-uncased-finetuned-sst2
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-wnli

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6883
- Accuracy: 0.5634

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 40 | 0.6883 | 0.5634 |
| No log | 2.0 | 80 | 0.6934 | 0.5634 |
| No log | 3.0 | 120 | 0.6960 | 0.5211 |
| No log | 4.0 | 160 | 0.6958 | 0.5634 |
| No log | 5.0 | 200 | 0.6964 | 0.5634 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]}
anirudh21/distilbert-base-uncased-finetuned-wnli
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anirudh21/electra-base-discriminator-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anirudh21/electra-base-discriminator-finetuned-mnli-mm
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anirudh21/electra-base-discriminator-finetuned-mnli
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anirudh21/electra-base-discriminator-finetuned-mrpc
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anirudh21/electra-base-discriminator-finetuned-qnli
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anirudh21/electra-base-discriminator-finetuned-qqp
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# electra-base-discriminator-finetuned-rte

This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4793
- Accuracy: 0.8231

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 156 | 0.6076 | 0.6570 |
| No log | 2.0 | 312 | 0.4824 | 0.7762 |
| No log | 3.0 | 468 | 0.4793 | 0.8231 |
| 0.4411 | 4.0 | 624 | 0.7056 | 0.7906 |
| 0.4411 | 5.0 | 780 | 0.6849 | 0.8159 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "electra-base-discriminator-finetuned-rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.8231046931407943, "name": "Accuracy"}]}]}]}
anirudh21/electra-base-discriminator-finetuned-rte
null
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anirudh21/electra-base-discriminator-finetuned-sst2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anirudh21/electra-base-discriminator-finetuned-stsb
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# electra-base-discriminator-finetuned-wnli

This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6893
- Accuracy: 0.5634

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 40 | 0.6893 | 0.5634 |
| No log | 2.0 | 80 | 0.7042 | 0.4225 |
| No log | 3.0 | 120 | 0.7008 | 0.3803 |
| No log | 4.0 | 160 | 0.6998 | 0.5634 |
| No log | 5.0 | 200 | 0.7016 | 0.5352 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "electra-base-discriminator-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]}
anirudh21/electra-base-discriminator-finetuned-wnli
null
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anirudh21/roberta-base-finetuned-wnli
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anirudh21/xlnet-base-cased-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlnet-base-cased-finetuned-rte

This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0656
- Accuracy: 0.6895

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 156 | 0.7007 | 0.4874 |
| No log | 2.0 | 312 | 0.6289 | 0.6751 |
| No log | 3.0 | 468 | 0.7020 | 0.6606 |
| 0.6146 | 4.0 | 624 | 1.0573 | 0.6570 |
| 0.6146 | 5.0 | 780 | 1.0656 | 0.6895 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "xlnet-base-cased-finetuned-rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.6895306859205776, "name": "Accuracy"}]}]}]}
anirudh21/xlnet-base-cased-finetuned-rte
null
[ "transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlnet-base-cased-finetuned-wnli

This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6874
- Accuracy: 0.5634

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 40 | 0.7209 | 0.5352 |
| No log | 2.0 | 80 | 0.6874 | 0.5634 |
| No log | 3.0 | 120 | 0.6908 | 0.5634 |
| No log | 4.0 | 160 | 0.6987 | 0.4930 |
| No log | 5.0 | 200 | 0.6952 | 0.5634 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "xlnet-base-cased-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]}
anirudh21/xlnet-base-cased-finetuned-wnli
null
[ "transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
anirudhovn/roberta-base-emotion
null
[ "transformers", "tf", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anirudhovn/roberta-base
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anisa/anisadwi
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anisha2102/docvqa
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anixane27/helloWorld
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anj/model1
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anjaaksenova/rugpt3small_based_on_gpt2-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wavlm-base-english

This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the english_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0955
- Wer: 0.0773

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 2.8664 | 0.17 | 300 | 2.8439 | 1.0 |
| 0.5009 | 0.34 | 600 | 0.2709 | 0.2162 |
| 0.2056 | 0.5 | 900 | 0.1934 | 0.1602 |
| 0.1648 | 0.67 | 1200 | 0.1576 | 0.1306 |
| 0.1922 | 0.84 | 1500 | 0.1358 | 0.1114 |
| 0.093 | 1.01 | 1800 | 0.1277 | 0.1035 |
| 0.0652 | 1.18 | 2100 | 0.1251 | 0.1005 |
| 0.0848 | 1.35 | 2400 | 0.1188 | 0.0964 |
| 0.0706 | 1.51 | 2700 | 0.1091 | 0.0905 |
| 0.0846 | 1.68 | 3000 | 0.1018 | 0.0840 |
| 0.0684 | 1.85 | 3300 | 0.0978 | 0.0809 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.0
- Tokenizers 0.10.3
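No usage snippet accompanies the card; a minimal transcription sketch follows, assuming the checkpoint has a CTC head reachable through the `automatic-speech-recognition` pipeline and that `sample.wav` is a placeholder 16 kHz mono recording:

```python
from transformers import pipeline

# Hedged sketch; "sample.wav" is a placeholder path, not from the card.
asr = pipeline(
    "automatic-speech-recognition",
    model="anjulRajendraSharma/WavLm-base-en",
)
print(asr("sample.wav")["text"])
```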
{"tags": ["automatic-speech-recognition", "english_asr", "generated_from_trainer"], "model-index": [{"name": "wavlm-base-english", "results": []}]}
anjulRajendraSharma/WavLm-base-en
null
[ "transformers", "pytorch", "tensorboard", "wavlm", "automatic-speech-recognition", "english_asr", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
{}
anjulRajendraSharma/wav2vec2-indian-english
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wavlm-libri-clean-100h-base

This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0955
- Wer: 0.0773

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 2.8664 | 0.17 | 300 | 2.8439 | 1.0 |
| 0.5009 | 0.34 | 600 | 0.2709 | 0.2162 |
| 0.2056 | 0.5 | 900 | 0.1934 | 0.1602 |
| 0.1648 | 0.67 | 1200 | 0.1576 | 0.1306 |
| 0.1922 | 0.84 | 1500 | 0.1358 | 0.1114 |
| 0.093 | 1.01 | 1800 | 0.1277 | 0.1035 |
| 0.0652 | 1.18 | 2100 | 0.1251 | 0.1005 |
| 0.0848 | 1.35 | 2400 | 0.1188 | 0.0964 |
| 0.0706 | 1.51 | 2700 | 0.1091 | 0.0905 |
| 0.0846 | 1.68 | 3000 | 0.1018 | 0.0840 |
| 0.0684 | 1.85 | 3300 | 0.0978 | 0.0809 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.0
- Tokenizers 0.10.3
{"tags": ["automatic-speech-recognition", "librispeech_asr", "generated_from_trainer"], "model-index": [{"name": "wavlm-libri-clean-100h-base", "results": []}]}
anjulRajendraSharma/wavlm-base-libri-clean-100
null
[ "transformers", "pytorch", "tensorboard", "wavlm", "automatic-speech-recognition", "librispeech_asr", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anker/Test
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ankit0208/chatBot
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ankitbhatnagar/kmeans
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
A model for summarizing meeting transcripts.
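Since the card only states the model's purpose, here is a minimal summarization sketch; the transcript string and generation lengths are made-up assumptions:

```python
from transformers import pipeline

# Assumed usage; the card itself ships no example.
summarizer = pipeline("summarization", model="ankitkhowal/minutes-of-meeting")

transcript = (
    "Alice: Let's move the release to Friday. "
    "Bob: Agreed, QA still needs two more days. "
    "Alice: Then we freeze the code on Wednesday."
)
print(summarizer(transcript, max_length=60, min_length=10)[0]["summary_text"])
```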
{}
ankitkhowal/minutes-of-meeting
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
ankitkupadhyay/dummy-model
null
[ "transformers", "pytorch", "camembert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
ankur310794/bart-base-keyphrase-generation-kpTimes
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
ankur310794/bart-base-keyphrase-extractive-openkp
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# Open Domain Question Answering

A core goal in artificial intelligence is to build systems that can read the web, and then answer complex questions about any topic. These question-answering (QA) systems could have a big impact on the way that we access information. Furthermore, open-domain question answering is a benchmark task in the development of Artificial Intelligence, since understanding text and being able to answer questions about it is something that we generally associate with intelligence.

# The Natural Questions Dataset

To help spur development in open-domain question answering, we have created the Natural Questions (NQ) corpus, along with a challenge website based on this data. The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets.
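The card describes the task and dataset but not inference; a minimal extractive-QA sketch follows. The repository's tags list TensorFlow weights, so the pipeline is asked for the TF framework explicitly; the question and context are invented:

```python
from transformers import pipeline

# Hedged sketch; question/context are invented, framework="tf" follows the repo tags.
qa = pipeline(
    "question-answering",
    model="ankur310794/bert-large-uncased-nq-small-answer",
    framework="tf",
)
answer = qa(
    question="What does the NQ corpus contain?",
    context=(
        "The NQ corpus contains questions from real users, and it requires QA "
        "systems to read and comprehend an entire Wikipedia article."
    ),
)
print(answer["answer"], answer["score"])
```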
{"tags": ["small answer"], "datasets": ["natural_questions"]}
ankur310794/bert-large-uncased-nq-small-answer
null
[ "transformers", "tf", "bert", "question-answering", "small answer", "dataset:natural_questions", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# My Awesome Model
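The card is only a title; judging from the repo tags (`gpt2`, `conversational`), a DialoGPT-style chat loop seems intended. A sketch under that assumption, with an invented prompt and turn format:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed DialoGPT-style usage; the turn format below is a guess, not documented.
model_id = "ann101020/le2sbot-hp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

user_input = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(user_input, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, user_input.shape[-1]:], skip_special_tokens=True))
```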
{"tags": ["conversational"]}
ann101020/le2sbot-hp
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
A POS-tagger for Old Church Slavonic trained on the Old Church Slavonic UD treebank (https://github.com/UniversalDependencies/UD_Old_Church_Slavonic-PROIEL). A GitHub repository with an API is available at https://github.com/annadmitrieva/chu-api.
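A minimal tagging sketch, reusing the widget sentence from the row's metadata as input; the pipeline call and aggregation setting are assumptions rather than documented usage:

```python
from transformers import pipeline

# Hedged sketch; the input sentence comes from the card's widget metadata.
tagger = pipeline(
    "token-classification",
    model="annadmitrieva/old-church-slavonic-pos",
    aggregation_strategy="simple",  # groups word pieces back into whole words
)
for token in tagger("Не осѫждаите да не осѫждени бѫдете"):
    print(token["word"], token["entity_group"])
```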
{"language": ["chu"], "license": "mit", "tags": ["Old Church Slavonic", "POS-tagging"], "widget": [{"text": "\u041d\u0435 \u043e\u0441\u046b\u0436\u0434\u0430\u0438\u0442\u0435 \u0434\u0430 \u043d\u0435 \u043e\u0441\u046b\u0436\u0434\u0435\u043d\u0438 \u0431\u046b\u0434\u0435\u0442\u0435"}]}
annadmitrieva/old-church-slavonic-pos
null
[ "transformers", "pytorch", "safetensors", "distilbert", "token-classification", "Old Church Slavonic", "POS-tagging", "chu", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-finetuned-addresso

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Framework versions

- Transformers 4.12.5
- Pytorch 1.8.1
- Datasets 1.15.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-uncased-finetuned-addresso", "results": []}]}
annafavaro/bert-base-uncased-finetuned-addresso
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
annafavaro/bert-large-uncased-finetuned-addresso
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
annafavaro/distilbert-base-uncased-finetuned-Addresso
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
annafavaro/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
A ktrain predictor for NER of adverse drug reactions (ADR) in patient forum discussions. Created with ktrain 0.29 and transformers 4.10. See requirements.txt to run the model.
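Since ktrain predictors load from a local folder rather than the Hub, here is a minimal sketch under the assumption that the repository files have been downloaded to `./ADR_predictor` (a hypothetical path):

```python
import ktrain

# Hedged sketch; "./ADR_predictor" is a hypothetical local copy of this repo.
predictor = ktrain.load_predictor("./ADR_predictor")
print(predictor.predict("The new medication gave me terrible headaches."))
```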
{}
annedirkson/ADR_extraction_patient_forum
null
[ "tf", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
annedirkson/BERT_embeddings_ADR_normalization
null
[ "transformers", "pytorch", "tf", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
annexcls/Roan
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
annieptba/causallm-annie-hw8
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
annieptba/causallm-annie
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
annieptba/electra-maskedlm-annie
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
annieptba/maskedlm-annie-hw8
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
annieptba/maskedlm-annie
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
annythepancake/lllll
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anon/apibart
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
anon-submission-mk/bert-base-macedonian-bulgarian-cased
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
anon-submission-mk/bert-base-macedonian-cased
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
anon-submission-mk/distilbert-base-macedonian-cased
null
[ "transformers", "pytorch", "tf", "jax", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
anon-submission-mk/electra-base-macedonian-bulgarian-cased-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
anon-submission-mk/electra-base-macedonian-cased-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
anon-submission-mk/electra-base-macedonian-cased-generator
null
[ "transformers", "pytorch", "electra", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
anondo/test_anon
null
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# German GPT-2 model

**Note**: This model was de-anonymized and now lives at: https://huggingface.co/dbmdz/german-gpt2

Please use the new model name instead!
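Following the card's pointer, a minimal generation sketch against the new model name; the prompt is the widget text from the row's metadata and the length cap is an arbitrary choice:

```python
from transformers import pipeline

# Sketch using the de-anonymized model name the card points to.
generator = pipeline("text-generation", model="dbmdz/german-gpt2")
print(generator("Heute ist sehr schönes Wetter in", max_length=40)[0]["generated_text"])
```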
{"language": "de", "license": "mit", "widget": [{"text": "Heute ist sehr sch\u00f6nes Wetter in"}]}
anonymous-german-nlp/german-gpt2
null
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anowakowski/poleval2021-qe-blind
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anowakowski/poleval2021-qe-nonblind
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# Disclaimer: This page is under maintenance. Please DO NOT refer to the information on this page to make any decision yet.

# Vaccinating COVID tweets

A fine-tuned model for the fact-classification task on English tweets about COVID-19/vaccine.

## Intended uses & limitations

You can classify if the input tweet (or any other statement) about COVID-19/vaccine is `true`, `false` or `misleading`. Note that since this model was trained with data up to May 2021, the most recent information may not be reflected.

#### How to use

You can use this model directly on this page or using `transformers` in python.

- Load the pipeline and run it on an input sequence

```python
from transformers import pipeline

pipe = pipeline("sentiment-analysis", model="ans/vaccinating-covid-tweets")
seq = "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
pipe(seq)
```

- Expected output

```python
[
  {"label": "false", "score": 0.07972867041826248},
  {"label": "misleading", "score": 0.019911376759409904},
  {"label": "true", "score": 0.9003599882125854}
]
```

- `true` examples

```python
"By the end of 2020, several vaccines had become available for use in different parts of the world."
"Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
"RNA vaccines were the first vaccines for SARS-CoV-2 to be produced and represent an entirely new vaccine approach."
```

- `false` examples

```python
"COVID-19 vaccine caused new strain in UK."
```

#### Limitations and bias

To conservatively classify whether an input sequence is true or not, the model may have predictions biased toward `false` or `misleading`.

## Training data & Procedure

#### Pre-trained baseline model

- Pre-trained model: [BERTweet](https://github.com/VinAIResearch/BERTweet)
  - trained based on the RoBERTa pre-training procedure
  - 850M General English Tweets (Jan 2012 to Aug 2019)
  - 23M COVID-19 English Tweets
  - Size of the model: >134M parameters
- Further training
  - Pre-training with recent COVID-19/vaccine tweets and fine-tuning for fact classification

#### 1) Pre-training language model

- The model was pre-trained on COVID-19/vaccine-related tweets using a masked language modeling (MLM) objective starting from BERTweet.
- The following datasets of English tweets were used:
  - Tweets with the trending #CovidVaccine hashtag, 207,000 tweets uploaded across Aug 2020 to Apr 2021 ([kaggle](https://www.kaggle.com/kaushiksuresh147/covidvaccine-tweets))
  - Tweets about all COVID-19 vaccines, 78,000 tweets uploaded across Dec 2020 to May 2021 ([kaggle](https://www.kaggle.com/gpreda/all-covid19-vaccines-tweets))
  - COVID-19 Twitter chatter dataset, 590,000 tweets uploaded across Mar 2021 to May 2021 ([github](https://github.com/thepanacealab/covid19_twitter))

#### 2) Fine-tuning for fact classification

- A model fine-tuned from the pre-trained language model (1) for the fact-classification task on COVID-19/vaccine.
- COVID-19/vaccine-related statements were collected from [Poynter](https://www.poynter.org/ifcn-covid-19-misinformation/) and [Snopes](https://www.snopes.com/) using Selenium, resulting in over 14,000 fact-checked statements from Jan 2020 to May 2021.
- Original labels were divided into the following three categories:
  - `False`: includes false, no evidence, manipulated, fake, not true, unproven and unverified
  - `Misleading`: includes misleading, exaggerated, out of context and needs context
  - `True`: includes true and correct

## Evaluation results

| Training loss | Validation loss | Training accuracy | Validation accuracy |
|:---:|:---:|:---:|:---:|
| 0.1062 | 0.1006 | 96.3% | 94.5% |

# Contributors

- This model is a part of the final team project from the MLDL for DS class at SNU.
- Team BIBI - Vaccinating COVID-NineTweets
- Team members: Ahn, Hyunju; An, Jiyong; An, Seungchan; Jeong, Seokho; Kim, Jungmin; Kim, Sangbeom
- Advisor: Prof. Wen-Syan Li

<a href="https://gsds.snu.ac.kr/"><img src="https://gsds.snu.ac.kr/wp-content/uploads/sites/50/2021/04/GSDS_logo2-e1619068952717.png" width="200" height="80"></a>
{"language": "en", "license": "apache-2.0", "datasets": ["tweets"], "widget": [{"text": "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."}]}
ans/vaccinating-covid-tweets
null
[ "transformers", "pytorch", "roberta", "text-classification", "en", "dataset:tweets", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anselmo0v/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ansfarooq7/l4project
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anshJain100/My-talking-bot
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{"tags": ["conversational"]}
anshengli2/DialogGPT-small-Bot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anshulagg/distilbert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
This repository doesn't contain a model, but only a tokenizer that can be used with the `tokenizers` library. This tokenizer is just a copy of `bert-base-uncased`.

```python
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("anthony/tokenizers-test")
```
{}
anthony/tokenizers-test
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
anthonymirand/haha_2019_adaptation_task
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
anthonymirand/haha_2019_primary_task
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Belgian GPT-2 🇧🇪

**A GPT-2 model pre-trained on a very large and heterogeneous French corpus (~60Gb).**

## Usage

You can use BelGPT-2 with [🤗 transformers](https://github.com/huggingface/transformers):

```python
import random

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Load pretrained model and tokenizer
model = GPT2LMHeadModel.from_pretrained("antoiloui/belgpt2")
tokenizer = GPT2Tokenizer.from_pretrained("antoiloui/belgpt2")

# Generate a sample of text
model.eval()
output = model.generate(
    bos_token_id=random.randint(1, 50000),
    do_sample=True,
    top_k=50,
    max_length=100,
    top_p=0.95,
    num_return_sequences=1,
)

# Decode it
decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))
print(decoded_output)
```

## Data

Below is the list of all the French corpora used to pre-train the model:

| Dataset | `$corpus_name` | Raw size | Cleaned size |
|:---|:---|:---:|:---:|
| CommonCrawl | `common_crawl` | 200.2 GB | 40.4 GB |
| NewsCrawl | `news_crawl` | 10.4 GB | 9.8 GB |
| Wikipedia | `wiki` | 19.4 GB | 4.1 GB |
| Wikisource | `wikisource` | 4.6 GB | 2.3 GB |
| Project Gutenberg | `gutenberg` | 1.3 GB | 1.1 GB |
| EuroParl | `europarl` | 289.9 MB | 278.7 MB |
| NewsCommentary | `news_commentary` | 61.4 MB | 58.1 MB |
| **Total** | | **236.3 GB** | **57.9 GB** |

## Documentation

Detailed documentation on the pre-trained model, its implementation, and the data can be found [here](https://github.com/antoiloui/belgpt2/blob/master/docs/index.md).

## Citation

For attribution in academic contexts, please cite this work as:

```
@misc{louis2020belgpt2,
  author = {Louis, Antoine},
  title = {{BelGPT-2: a GPT-2 model pre-trained on French corpora.}},
  year = {2020},
  howpublished = {\url{https://github.com/antoiloui/belgpt2}},
}
```
{"language": ["fr"], "license": ["mit"], "widget": [{"text": "Hier, Elon Musk a"}, {"text": "Pourquoi a-t-il"}, {"text": "Tout \u00e0 coup, elle"}]}
antoinelouis/belgpt2
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "gpt2", "text-generation", "fr", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# NetBERT 📶

<img align="left" src="illustration.jpg" width="150"/>

NetBERT is a [BERT-base](https://huggingface.co/bert-base-cased) model further pre-trained on a huge corpus of computer networking text (~23Gb).

## Usage

You can use the raw model for masked language modeling (MLM), but it's mostly intended to be fine-tuned on a downstream task, especially one that uses the whole sentence to make decisions such as text classification, extractive question answering, or semantic search.

You can use this model directly with a pipeline for [masked language modeling](https://huggingface.co/tasks/fill-mask):

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='antoinelouis/netbert')
unmasker("The nodes of a computer network may include [MASK].")
```

You can also use this model to [extract the features](https://huggingface.co/tasks/feature-extraction) of a given text:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('antoinelouis/netbert')
model = AutoModel.from_pretrained('antoinelouis/netbert')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Documentation

Detailed documentation on the pre-trained model, its implementation, and the data can be found on [Github](https://github.com/antoiloui/netbert/blob/master/docs/index.md).

## Citation

For attribution in academic contexts, please cite this work as:

```
@mastersthesis{louis2020netbert,
  title={NetBERT: A Pre-trained Language Representation Model for Computer Networking},
  author={Louis, Antoine},
  year={2020},
  school={University of Liege}
}
```
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "widget": [{"text": "The nodes of a computer network may include [MASK]."}]}
antoinelouis/netbert
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
antoinegk/distilbert-base-uncased-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
antoinem/workspace
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilhubert-ft-common-language

This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7214
- Accuracy: 0.2797

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 3.6543 | 1.0 | 173 | 3.7611 | 0.0491 |
| 3.2221 | 2.0 | 346 | 3.4868 | 0.1352 |
| 2.9332 | 3.0 | 519 | 3.2732 | 0.1861 |
| 2.7299 | 4.0 | 692 | 3.0944 | 0.2172 |
| 2.5638 | 5.0 | 865 | 2.9790 | 0.2400 |
| 2.3871 | 6.0 | 1038 | 2.8668 | 0.2590 |
| 2.3384 | 7.0 | 1211 | 2.7972 | 0.2653 |
| 2.2648 | 8.0 | 1384 | 2.7625 | 0.2695 |
| 2.2162 | 9.0 | 1557 | 2.7405 | 0.2782 |
| 2.1915 | 10.0 | 1730 | 2.7214 | 0.2797 |

### Framework versions

- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
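A minimal language-identification sketch for the card above; the audio path is a placeholder and the pipeline usage is assumed from the repo's `audio-classification` tag:

```python
from transformers import pipeline

# Hedged sketch; "clip.wav" is a placeholder 16 kHz mono file.
clf = pipeline(
    "audio-classification",
    model="anton-l/distilhubert-ft-common-language",
)
for pred in clf("clip.wav", top_k=3):
    print(pred["label"], round(pred["score"], 3))
```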
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["common_language"], "metrics": ["accuracy"], "model-index": [{"name": "distilhubert-ft-common-language", "results": []}]}
anton-l/distilhubert-ft-common-language
null
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:common_language", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilhubert-ft-keyword-spotting

This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1163
- Accuracy: 0.9706

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 256
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8176        | 1.0   | 200  | 0.7718          | 0.8116   |
| 0.2364        | 2.0   | 400  | 0.2107          | 0.9662   |
| 0.1198        | 3.0   | 600  | 0.1374          | 0.9678   |
| 0.0891        | 4.0   | 800  | 0.1163          | 0.9706   |
| 0.085         | 5.0   | 1000 | 0.1180          | 0.9690   |

### Framework versions

- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
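## Example usage (sketch)

A minimal sketch of running the classifier without the pipeline helper; the one-second silent waveform below is only a stand-in for a real keyword clip.

```python
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "anton-l/distilhubert-ft-keyword-spotting"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)

# One second of 16 kHz silence as a placeholder input.
waveform = torch.zeros(16000)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```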
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "distilhubert-ft-keyword-spotting", "results": []}]}
anton-l/distilhubert-ft-keyword-spotting
null
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
anton-l/gpt-j-tiny-random
null
[ "transformers", "pytorch", "rust", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hubert-base-ft-keyword-spotting

This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0774
- Accuracy: 0.9819

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0422        | 1.0   | 399  | 0.8999          | 0.6918   |
| 0.3296        | 2.0   | 798  | 0.1505          | 0.9778   |
| 0.2088        | 3.0   | 1197 | 0.0901          | 0.9816   |
| 0.202         | 4.0   | 1596 | 0.0848          | 0.9813   |
| 0.1535        | 5.0   | 1995 | 0.0774          | 0.9819   |

### Framework versions

- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
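## Hyperparameters as code (sketch)

The effective batch size of 128 above is the per-device batch size of 32 multiplied by 4 gradient accumulation steps. Purely as an illustration (the original training script may have set further options), these hyperparameters map onto `TrainingArguments` roughly as follows:

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="hubert-base-ft-keyword-spotting",
    learning_rate=3e-5,
    per_device_train_batch_size=32,  # x 4 accumulation steps = 128 effective
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,
    warmup_ratio=0.1,                # linear schedule with 10% warmup
    num_train_epochs=5,
    fp16=True,                       # "Native AMP" mixed precision
    seed=0,
)
```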
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "hubert-base-ft-keyword-spotting", "results": []}]}
anton-l/hubert-base-ft-keyword-spotting
null
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
anton-l/megatron-11b
null
[ "transformers", "pytorch", "megatron", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
anton-l/sew-common_voice-tr-demo
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-classification
transformers
{}
anton-l/sew-d-mid-400k-ft-keyword-spotting
null
[ "transformers", "pytorch", "tensorboard", "sew-d", "audio-classification", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sew-mid-100k-ft-common-language

This model is a fine-tuned version of [asapp/sew-mid-100k](https://huggingface.co/asapp/sew-mid-100k) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1189
- Accuracy: 0.3842

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.608         | 1.0   | 173  | 3.7266          | 0.0540   |
| 3.1298        | 2.0   | 346  | 3.2180          | 0.1654   |
| 2.8481        | 3.0   | 519  | 2.9270          | 0.2019   |
| 2.648         | 4.0   | 692  | 2.6991          | 0.2619   |
| 2.5           | 5.0   | 865  | 2.5236          | 0.3004   |
| 2.2578        | 6.0   | 1038 | 2.4019          | 0.3212   |
| 2.2782        | 7.0   | 1211 | 2.1698          | 0.3658   |
| 2.1665        | 8.0   | 1384 | 2.1976          | 0.3631   |
| 2.1626        | 9.0   | 1557 | 2.1473          | 0.3791   |
| 2.1514        | 10.0  | 1730 | 2.1189          | 0.3842   |

### Framework versions

- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
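## Example usage (sketch)

A minimal sketch for inspecting the model's top language guesses; `utterance.wav` is a placeholder for a 16 kHz mono speech recording.

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="anton-l/sew-mid-100k-ft-common-language")

# "utterance.wav" is a placeholder path for a 16 kHz mono speech clip.
for pred in classifier("utterance.wav", top_k=5):
    print(f"{pred['label']}: {pred['score']:.3f}")
```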
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["common_language"], "metrics": ["accuracy"], "model-index": [{"name": "sew-mid-100k-ft-common-language", "results": []}]}
anton-l/sew-mid-100k-ft-common-language
null
[ "transformers", "pytorch", "tensorboard", "sew", "audio-classification", "generated_from_trainer", "dataset:common_language", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sew-mid-100k-ft-keyword-spotting

This model is a fine-tuned version of [asapp/sew-mid-100k](https://huggingface.co/asapp/sew-mid-100k) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0975
- Accuracy: 0.9757

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5999        | 1.0   | 399  | 0.2262          | 0.9635   |
| 0.4271        | 2.0   | 798  | 0.1230          | 0.9697   |
| 0.3778        | 3.0   | 1197 | 0.1052          | 0.9731   |
| 0.3227        | 4.0   | 1596 | 0.0975          | 0.9757   |
| 0.3081        | 5.0   | 1995 | 0.0962          | 0.9753   |

### Framework versions

- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
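## Example usage (sketch)

If you need the full probability distribution over keyword labels rather than a single prediction, a sketch like the following works; the silent clip is only a placeholder input.

```python
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "anton-l/sew-mid-100k-ft-keyword-spotting"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)
model.eval()

# One second of 16 kHz silence as a placeholder keyword clip.
inputs = extractor([0.0] * 16000, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], f"{p:.3f}")
```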
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "sew-mid-100k-ft-keyword-spotting", "results": []}]}
anton-l/sew-mid-100k-ft-keyword-spotting
null
[ "transformers", "pytorch", "tensorboard", "sew", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
anton-l/wav2vec2-base-960h
null
[ "transformers", "pytorch", "wav2vec2", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-finetuned-ks

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0952
- Accuracy: 0.9823

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7908        | 1.0   | 399  | 0.6776          | 0.9009   |
| 0.3202        | 2.0   | 798  | 0.2061          | 0.9763   |
| 0.221         | 3.0   | 1197 | 0.1257          | 0.9785   |
| 0.1773        | 4.0   | 1596 | 0.0990          | 0.9813   |
| 0.1729        | 5.0   | 1995 | 0.0952          | 0.9823   |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
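## Example usage (sketch)

To sanity-check the checkpoint on the data it was evaluated on, one can classify a clip from the SUPERB keyword-spotting subset; a minimal sketch, assuming the `datasets` library is installed:

```python
from datasets import load_dataset
from transformers import pipeline

# Pull a single validation clip from the keyword-spotting subset of SUPERB.
sample = load_dataset("superb", "ks", split="validation[:1]")[0]

classifier = pipeline("audio-classification", model="anton-l/wav2vec2-base-finetuned-ks")
print(classifier(sample["audio"]["array"]))  # raw 16 kHz waveform as a numpy array
```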
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "wav2vec2-base-finetuned-ks", "results": []}]}
anton-l/wav2vec2-base-finetuned-ks
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-classification
transformers
{}
anton-l/wav2vec2-base-ft-common-language
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-ft-keyword-spotting

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0824
- Accuracy: 0.9826

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8972        | 1.0   | 399  | 0.7023          | 0.8174   |
| 0.3274        | 2.0   | 798  | 0.1634          | 0.9773   |
| 0.1993        | 3.0   | 1197 | 0.1048          | 0.9788   |
| 0.1777        | 4.0   | 1596 | 0.0824          | 0.9826   |
| 0.1527        | 5.0   | 1995 | 0.0812          | 0.9810   |

### Framework versions

- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
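## Schedule details (sketch)

The linear schedule with `lr_scheduler_warmup_ratio: 0.1` means the learning rate ramps up over roughly the first 10% of the 1995 optimization steps (about 199 steps) and then decays linearly to zero. An illustrative sketch with a dummy parameter:

```python
import torch
from transformers import get_linear_schedule_with_warmup

total_steps = 1995                     # 5 epochs x 399 steps per epoch
warmup_steps = int(0.1 * total_steps)  # warmup_ratio of 0.1 -> 199 steps

# A dummy parameter, just to instantiate the optimizer for illustration.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=3e-5, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(optimizer, warmup_steps, total_steps)

for step in range(total_steps):
    optimizer.step()
    scheduler.step()
```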
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "wav2vec2-base-ft-keyword-spotting", "results": []}]}
anton-l/wav2vec2-base-ft-keyword-spotting
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-keyword-spotting

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0746
- Accuracy: 0.9843

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8279        | 1.0   | 399  | 0.6792          | 0.8558   |
| 0.2961        | 2.0   | 798  | 0.1383          | 0.9798   |
| 0.2069        | 3.0   | 1197 | 0.0972          | 0.9809   |
| 0.1757        | 4.0   | 1596 | 0.0843          | 0.9825   |
| 0.1607        | 5.0   | 1995 | 0.0746          | 0.9843   |

### Framework versions

- Transformers 4.11.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
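## Example usage (sketch)

A sketch of scoring a small batch of SUPERB test clips; with only 8 examples the measured accuracy will be noisy compared to the full test split used above.

```python
import torch
from datasets import load_dataset
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "anton-l/wav2vec2-base-keyword-spotting"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)
model.eval()

# Only 8 clips here; the reported metric uses the whole test split.
ds = load_dataset("superb", "ks", split="test[:8]")
arrays = [clip["array"] for clip in ds["audio"]]
inputs = extractor(arrays, sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
    preds = model(**inputs).logits.argmax(-1)
accuracy = (preds == torch.tensor(ds["label"])).float().mean().item()
print(f"accuracy on 8 clips: {accuracy:.2f}")
```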
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "wav2vec2-base-keyword-spotting", "results": []}]}
anton-l/wav2vec2-base-keyword-spotting
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-lang-id

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the anton-l/common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9836
- Accuracy: 0.7945

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 4
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.9568        | 1.0   | 173  | 3.2866          | 0.1146   |
| 1.9243        | 2.0   | 346  | 2.1241          | 0.3840   |
| 1.2923        | 3.0   | 519  | 1.5498          | 0.5489   |
| 0.8659        | 4.0   | 692  | 1.4953          | 0.6126   |
| 0.5539        | 5.0   | 865  | 1.2431          | 0.6926   |
| 0.4101        | 6.0   | 1038 | 1.1443          | 0.7232   |
| 0.2945        | 7.0   | 1211 | 1.0870          | 0.7544   |
| 0.1552        | 8.0   | 1384 | 1.1080          | 0.7661   |
| 0.0968        | 9.0   | 1557 | 0.9836          | 0.7945   |
| 0.0623        | 10.0  | 1730 | 1.0252          | 0.7993   |

### Framework versions

- Transformers 4.11.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
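## Label inventory (sketch)

To see which languages the classifier can actually predict, the label set can be read straight off the model config; a quick sketch:

```python
from transformers import AutoModelForAudioClassification

model = AutoModelForAudioClassification.from_pretrained("anton-l/wav2vec2-base-lang-id")

# id2label holds the CommonLanguage label inventory.
print(model.config.num_labels, "languages:")
print(sorted(model.config.id2label.values()))
```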
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["common_language"], "metrics": ["accuracy"], "model-index": [{"name": "wav2vec2-base-lang-id", "results": []}]}
anton-l/wav2vec2-base-lang-id
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:common_language", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
anton-l/wav2vec2-base-superb-sd
null
[ "transformers", "pytorch", "wav2vec2", "audio-frame-classification", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00