pipeline_tag (string, 48 classes) | library_name (string, 205 classes) | text (string, 0–18.3M chars) | metadata (string, 2–1.07B chars) | id (string, 5–122 chars) | last_modified (null) | tags (list, 1–1.84k items) | sha (null) | created_at (string, 25 chars) |
---|---|---|---|---|---|---|---|---|
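To work with this dump programmatically, the `datasets` library can load the rows directly; a minimal sketch follows, with the caveat that the repository id below is a hypothetical placeholder, since the dataset's actual path is not given in this dump.

```python
from datasets import load_dataset

# "user/model-card-dump" is a hypothetical id -- substitute the real path.
ds = load_dataset("user/model-card-dump", split="train")

# Each row is one model card: the README body in `text`, the YAML front
# matter serialized as JSON in `metadata`, plus id / tags / created_at.
row = ds[0]
print(row["id"], row["pipeline_tag"], row["library_name"])
print(row["text"][:200])
```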
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-5
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset (SST-2, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 1.3078
- Accuracy: 0.6930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
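These bullets translate almost one-to-one into `transformers.TrainingArguments`; the sketch below is a reconstruction under that assumption (the original training script is not published in this card, and `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-v3-large__sst2__train-8-5",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: the table logs one eval per epoch
)
```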
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6813 | 1.0 | 3 | 0.7842 | 0.25 |
| 0.6617 | 2.0 | 6 | 0.7968 | 0.25 |
| 0.6945 | 3.0 | 9 | 0.7746 | 0.25 |
| 0.5967 | 4.0 | 12 | 0.7557 | 0.25 |
| 0.4824 | 5.0 | 15 | 0.6920 | 0.25 |
| 0.3037 | 6.0 | 18 | 0.6958 | 0.5 |
| 0.2329 | 7.0 | 21 | 0.6736 | 0.5 |
| 0.1441 | 8.0 | 24 | 0.3749 | 1.0 |
| 0.0875 | 9.0 | 27 | 0.3263 | 0.75 |
| 0.0655 | 10.0 | 30 | 0.3525 | 0.75 |
| 0.0373 | 11.0 | 33 | 0.1993 | 1.0 |
| 0.0173 | 12.0 | 36 | 0.1396 | 1.0 |
| 0.0147 | 13.0 | 39 | 0.0655 | 1.0 |
| 0.0084 | 14.0 | 42 | 0.0343 | 1.0 |
| 0.0049 | 15.0 | 45 | 0.0225 | 1.0 |
| 0.004 | 16.0 | 48 | 0.0167 | 1.0 |
| 0.003 | 17.0 | 51 | 0.0134 | 1.0 |
| 0.0027 | 18.0 | 54 | 0.0114 | 1.0 |
| 0.002 | 19.0 | 57 | 0.0104 | 1.0 |
| 0.0015 | 20.0 | 60 | 0.0099 | 1.0 |
| 0.0014 | 21.0 | 63 | 0.0095 | 1.0 |
| 0.0013 | 22.0 | 66 | 0.0095 | 1.0 |
| 0.0012 | 23.0 | 69 | 0.0091 | 1.0 |
| 0.0011 | 24.0 | 72 | 0.0085 | 1.0 |
| 0.0009 | 25.0 | 75 | 0.0081 | 1.0 |
| 0.001 | 26.0 | 78 | 0.0077 | 1.0 |
| 0.0008 | 27.0 | 81 | 0.0074 | 1.0 |
| 0.0009 | 28.0 | 84 | 0.0071 | 1.0 |
| 0.0007 | 29.0 | 87 | 0.0068 | 1.0 |
| 0.0008 | 30.0 | 90 | 0.0064 | 1.0 |
| 0.0007 | 31.0 | 93 | 0.0062 | 1.0 |
| 0.0007 | 32.0 | 96 | 0.0059 | 1.0 |
| 0.0007 | 33.0 | 99 | 0.0056 | 1.0 |
| 0.0005 | 34.0 | 102 | 0.0054 | 1.0 |
| 0.0006 | 35.0 | 105 | 0.0053 | 1.0 |
| 0.0008 | 36.0 | 108 | 0.0051 | 1.0 |
| 0.0007 | 37.0 | 111 | 0.0050 | 1.0 |
| 0.0007 | 38.0 | 114 | 0.0049 | 1.0 |
| 0.0006 | 39.0 | 117 | 0.0048 | 1.0 |
| 0.0005 | 40.0 | 120 | 0.0048 | 1.0 |
| 0.0005 | 41.0 | 123 | 0.0048 | 1.0 |
| 0.0005 | 42.0 | 126 | 0.0047 | 1.0 |
| 0.0005 | 43.0 | 129 | 0.0047 | 1.0 |
| 0.0005 | 44.0 | 132 | 0.0047 | 1.0 |
| 0.0006 | 45.0 | 135 | 0.0047 | 1.0 |
| 0.0005 | 46.0 | 138 | 0.0047 | 1.0 |
| 0.0005 | 47.0 | 141 | 0.0047 | 1.0 |
| 0.0006 | 48.0 | 144 | 0.0047 | 1.0 |
| 0.0005 | 49.0 | 147 | 0.0047 | 1.0 |
| 0.0005 | 50.0 | 150 | 0.0047 | 1.0 |
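Training loss is effectively zero from epoch 15 on and validation loss sits flat at 0.0047 over the final nine epochs, so most of the 50 epochs change nothing. The original run did not use early stopping, but a minimal sketch of adding it with `transformers` would be:

```python
from transformers import EarlyStoppingCallback, TrainingArguments

# Extra arguments early stopping needs on top of the hyperparameters above.
args = TrainingArguments(
    output_dir="deberta-v3-large__sst2__train-8-5",  # placeholder
    evaluation_strategy="epoch",
    save_strategy="epoch",             # must match evaluation_strategy
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
# Pass to Trainer(..., callbacks=[early_stop]) to halt after three
# consecutive evaluations with no improvement in validation loss.
early_stop = EarlyStoppingCallback(early_stopping_patience=3)
```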
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
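A minimal sketch for querying the published checkpoint (the card does not document human-readable label names, so the output may use generic `LABEL_0`/`LABEL_1` ids from the uploaded config):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="SetFit/deberta-v3-large__sst2__train-8-5",
)
# SST-2-style binary sentiment; label names come from the model config.
print(clf("a gorgeous, witty, seductive movie"))
```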
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "deberta-v3-large__sst2__train-8-5", "results": []}]} | SetFit/deberta-v3-large__sst2__train-8-5 | null | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-6
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset (SST-2, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 1.4331
- Accuracy: 0.7106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6486 | 1.0 | 3 | 0.7901 | 0.25 |
| 0.6418 | 2.0 | 6 | 0.9259 | 0.25 |
| 0.6169 | 3.0 | 9 | 1.0574 | 0.25 |
| 0.5639 | 4.0 | 12 | 1.1372 | 0.25 |
| 0.4562 | 5.0 | 15 | 0.6090 | 0.5 |
| 0.3105 | 6.0 | 18 | 0.4435 | 1.0 |
| 0.2303 | 7.0 | 21 | 0.2804 | 1.0 |
| 0.1388 | 8.0 | 24 | 0.2205 | 1.0 |
| 0.0918 | 9.0 | 27 | 0.1282 | 1.0 |
| 0.0447 | 10.0 | 30 | 0.0643 | 1.0 |
| 0.0297 | 11.0 | 33 | 0.0361 | 1.0 |
| 0.0159 | 12.0 | 36 | 0.0211 | 1.0 |
| 0.0102 | 13.0 | 39 | 0.0155 | 1.0 |
| 0.0061 | 14.0 | 42 | 0.0158 | 1.0 |
| 0.0049 | 15.0 | 45 | 0.0189 | 1.0 |
| 0.0035 | 16.0 | 48 | 0.0254 | 1.0 |
| 0.0027 | 17.0 | 51 | 0.0305 | 1.0 |
| 0.0021 | 18.0 | 54 | 0.0287 | 1.0 |
| 0.0016 | 19.0 | 57 | 0.0215 | 1.0 |
| 0.0016 | 20.0 | 60 | 0.0163 | 1.0 |
| 0.0014 | 21.0 | 63 | 0.0138 | 1.0 |
| 0.0015 | 22.0 | 66 | 0.0131 | 1.0 |
| 0.001 | 23.0 | 69 | 0.0132 | 1.0 |
| 0.0014 | 24.0 | 72 | 0.0126 | 1.0 |
| 0.0011 | 25.0 | 75 | 0.0125 | 1.0 |
| 0.001 | 26.0 | 78 | 0.0119 | 1.0 |
| 0.0008 | 27.0 | 81 | 0.0110 | 1.0 |
| 0.0007 | 28.0 | 84 | 0.0106 | 1.0 |
| 0.0008 | 29.0 | 87 | 0.0095 | 1.0 |
| 0.0009 | 30.0 | 90 | 0.0089 | 1.0 |
| 0.0008 | 31.0 | 93 | 0.0083 | 1.0 |
| 0.0007 | 32.0 | 96 | 0.0075 | 1.0 |
| 0.0008 | 33.0 | 99 | 0.0066 | 1.0 |
| 0.0006 | 34.0 | 102 | 0.0059 | 1.0 |
| 0.0007 | 35.0 | 105 | 0.0054 | 1.0 |
| 0.0008 | 36.0 | 108 | 0.0051 | 1.0 |
| 0.0007 | 37.0 | 111 | 0.0049 | 1.0 |
| 0.0007 | 38.0 | 114 | 0.0047 | 1.0 |
| 0.0006 | 39.0 | 117 | 0.0045 | 1.0 |
| 0.0006 | 40.0 | 120 | 0.0046 | 1.0 |
| 0.0005 | 41.0 | 123 | 0.0045 | 1.0 |
| 0.0006 | 42.0 | 126 | 0.0044 | 1.0 |
| 0.0006 | 43.0 | 129 | 0.0043 | 1.0 |
| 0.0006 | 44.0 | 132 | 0.0044 | 1.0 |
| 0.0005 | 45.0 | 135 | 0.0045 | 1.0 |
| 0.0006 | 46.0 | 138 | 0.0043 | 1.0 |
| 0.0006 | 47.0 | 141 | 0.0043 | 1.0 |
| 0.0006 | 48.0 | 144 | 0.0041 | 1.0 |
| 0.0007 | 49.0 | 147 | 0.0042 | 1.0 |
| 0.0005 | 50.0 | 150 | 0.0042 | 1.0 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "deberta-v3-large__sst2__train-8-6", "results": []}]} | SetFit/deberta-v3-large__sst2__train-8-6 | null | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-7
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset (SST-2, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 0.7037
- Accuracy: 0.5008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6864 | 1.0 | 3 | 0.7800 | 0.25 |
| 0.6483 | 2.0 | 6 | 0.8067 | 0.25 |
| 0.6028 | 3.0 | 9 | 0.8500 | 0.25 |
| 0.4086 | 4.0 | 12 | 1.0661 | 0.25 |
| 0.2923 | 5.0 | 15 | 1.2302 | 0.25 |
| 0.2059 | 6.0 | 18 | 1.0312 | 0.5 |
| 0.1238 | 7.0 | 21 | 1.1271 | 0.5 |
| 0.0711 | 8.0 | 24 | 1.3100 | 0.5 |
| 0.0453 | 9.0 | 27 | 1.4208 | 0.5 |
| 0.0198 | 10.0 | 30 | 1.5988 | 0.5 |
| 0.0135 | 11.0 | 33 | 1.9174 | 0.5 |
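The Accuracy column in these tables is plain argmax accuracy; a `compute_metrics` function of the kind the Trainer would call looks like the sketch below (a reconstruction, not the published script):

```python
import numpy as np

def compute_metrics(eval_pred):
    """Fraction of argmax predictions that match the gold labels."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}
```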
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "deberta-v3-large__sst2__train-8-7", "results": []}]} | SetFit/deberta-v3-large__sst2__train-8-7 | null | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-8
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset (SST-2, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 0.7414
- Accuracy: 0.5623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6597 | 1.0 | 3 | 0.7716 | 0.25 |
| 0.6376 | 2.0 | 6 | 0.7802 | 0.25 |
| 0.5857 | 3.0 | 9 | 0.6625 | 0.75 |
| 0.4024 | 4.0 | 12 | 0.5195 | 0.75 |
| 0.2635 | 5.0 | 15 | 0.4222 | 1.0 |
| 0.1714 | 6.0 | 18 | 0.4410 | 0.5 |
| 0.1267 | 7.0 | 21 | 0.7773 | 0.75 |
| 0.0582 | 8.0 | 24 | 0.9070 | 0.75 |
| 0.0374 | 9.0 | 27 | 0.9539 | 0.75 |
| 0.0204 | 10.0 | 30 | 1.0507 | 0.75 |
| 0.012 | 11.0 | 33 | 1.2802 | 0.5 |
| 0.0086 | 12.0 | 36 | 1.4272 | 0.5 |
| 0.0049 | 13.0 | 39 | 1.4803 | 0.5 |
| 0.0039 | 14.0 | 42 | 1.4912 | 0.5 |
| 0.0031 | 15.0 | 45 | 1.5231 | 0.5 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "deberta-v3-large__sst2__train-8-8", "results": []}]} | SetFit/deberta-v3-large__sst2__train-8-8 | null | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-9
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset (SST-2, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 0.6013
- Accuracy: 0.7210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6757 | 1.0 | 3 | 0.7810 | 0.25 |
| 0.6506 | 2.0 | 6 | 0.8102 | 0.25 |
| 0.6463 | 3.0 | 9 | 0.8313 | 0.25 |
| 0.5813 | 4.0 | 12 | 0.8858 | 0.25 |
| 0.4635 | 5.0 | 15 | 0.8220 | 0.25 |
| 0.3992 | 6.0 | 18 | 0.7226 | 0.5 |
| 0.3281 | 7.0 | 21 | 0.6707 | 0.75 |
| 0.2276 | 8.0 | 24 | 0.7515 | 0.75 |
| 0.1674 | 9.0 | 27 | 0.6971 | 0.75 |
| 0.0873 | 10.0 | 30 | 0.5419 | 0.75 |
| 0.0525 | 11.0 | 33 | 0.5025 | 0.75 |
| 0.0286 | 12.0 | 36 | 0.5229 | 0.75 |
| 0.0149 | 13.0 | 39 | 0.5660 | 0.75 |
| 0.0082 | 14.0 | 42 | 0.6954 | 0.75 |
| 0.006 | 15.0 | 45 | 0.8649 | 0.75 |
| 0.0043 | 16.0 | 48 | 1.0011 | 0.75 |
| 0.0035 | 17.0 | 51 | 1.0909 | 0.75 |
| 0.0021 | 18.0 | 54 | 1.1615 | 0.75 |
| 0.0017 | 19.0 | 57 | 1.2147 | 0.75 |
| 0.0013 | 20.0 | 60 | 1.2585 | 0.75 |
| 0.0016 | 21.0 | 63 | 1.2917 | 0.75 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "deberta-v3-large__sst2__train-8-9", "results": []}]} | SetFit/deberta-v3-large__sst2__train-8-9 | null | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | SetFit/distilbert-base-uncased__TREC-QC__all-train | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | SetFit/distilbert-base-uncased__enron_spam__all-train | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | SetFit/distilbert-base-uncased__ethos_binary__all-train | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | SetFit/distilbert-base-uncased__hate_speech_offensive__all-train | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
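The four rows above carry an empty `metadata` object (`{}`), while the trainer-generated cards below embed the license, tags, and a model-index as JSON; unpacking that column is straightforward (the example string is copied verbatim from the next row):

```python
import json

raw = ('{"license": "apache-2.0", "tags": ["generated_from_trainer"], '
       '"metrics": ["accuracy"], "model-index": [{"name": '
       '"distilbert-base-uncased__hate_speech_offensive__train-16-0", '
       '"results": []}]}')

meta = json.loads(raw)
print(meta["license"])                 # apache-2.0
print(meta["model-index"][0]["name"])  # the model's declared name
```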
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 1.2707
- Accuracy: 0.517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0943 | 1.0 | 10 | 1.1095 | 0.3 |
| 1.0602 | 2.0 | 20 | 1.1086 | 0.4 |
| 1.0159 | 3.0 | 30 | 1.1165 | 0.4 |
| 0.9027 | 4.0 | 40 | 1.1377 | 0.4 |
| 0.8364 | 5.0 | 50 | 1.0126 | 0.5 |
| 0.6653 | 6.0 | 60 | 0.9298 | 0.5 |
| 0.535 | 7.0 | 70 | 0.9555 | 0.5 |
| 0.3713 | 8.0 | 80 | 0.8543 | 0.4 |
| 0.1633 | 9.0 | 90 | 0.9876 | 0.4 |
| 0.1069 | 10.0 | 100 | 0.8383 | 0.6 |
| 0.0591 | 11.0 | 110 | 0.8056 | 0.6 |
| 0.0344 | 12.0 | 120 | 0.8915 | 0.6 |
| 0.0265 | 13.0 | 130 | 0.8722 | 0.6 |
| 0.0196 | 14.0 | 140 | 1.0064 | 0.6 |
| 0.0158 | 15.0 | 150 | 1.0479 | 0.6 |
| 0.0128 | 16.0 | 160 | 1.0723 | 0.6 |
| 0.0121 | 17.0 | 170 | 1.0758 | 0.6 |
| 0.0093 | 18.0 | 180 | 1.1236 | 0.6 |
| 0.0085 | 19.0 | 190 | 1.1480 | 0.6 |
| 0.0084 | 20.0 | 200 | 1.1651 | 0.6 |
| 0.0077 | 21.0 | 210 | 1.1832 | 0.6 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
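Since hate_speech_offensive is a three-way task, it helps to request scores for every class when querying the checkpoint; which of `LABEL_0/1/2` corresponds to hate speech, offensive language, or neither is not stated in this card, so check the config before trusting any mapping. A minimal sketch:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-0",
)
# top_k=None returns scores for all classes in recent transformers versions
# (older versions used return_all_scores=True instead).
print(clf("some example text", top_k=None))
```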
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-16-0", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-0 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 1.0424
- Accuracy: 0.5355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0989 | 1.0 | 10 | 1.1049 | 0.1 |
| 1.0641 | 2.0 | 20 | 1.0768 | 0.3 |
| 0.9742 | 3.0 | 30 | 1.0430 | 0.4 |
| 0.8765 | 4.0 | 40 | 1.0058 | 0.4 |
| 0.6979 | 5.0 | 50 | 0.8488 | 0.7 |
| 0.563 | 6.0 | 60 | 0.7221 | 0.7 |
| 0.4135 | 7.0 | 70 | 0.6587 | 0.8 |
| 0.2509 | 8.0 | 80 | 0.5577 | 0.7 |
| 0.0943 | 9.0 | 90 | 0.5840 | 0.7 |
| 0.0541 | 10.0 | 100 | 0.6959 | 0.7 |
| 0.0362 | 11.0 | 110 | 0.6884 | 0.6 |
| 0.0254 | 12.0 | 120 | 0.9263 | 0.6 |
| 0.0184 | 13.0 | 130 | 0.7992 | 0.6 |
| 0.0172 | 14.0 | 140 | 0.7351 | 0.6 |
| 0.0131 | 15.0 | 150 | 0.7664 | 0.6 |
| 0.0117 | 16.0 | 160 | 0.8262 | 0.6 |
| 0.0101 | 17.0 | 170 | 0.8839 | 0.6 |
| 0.0089 | 18.0 | 180 | 0.9018 | 0.6 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-16-1", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-1 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 0.9210
- Accuracy: 0.5635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0915 | 1.0 | 10 | 1.1051 | 0.4 |
| 1.0663 | 2.0 | 20 | 1.0794 | 0.3 |
| 1.0307 | 3.0 | 30 | 1.0664 | 0.5 |
| 0.9443 | 4.0 | 40 | 1.0729 | 0.5 |
| 0.8373 | 5.0 | 50 | 1.0175 | 0.4 |
| 0.6892 | 6.0 | 60 | 0.9624 | 0.5 |
| 0.538 | 7.0 | 70 | 0.9924 | 0.5 |
| 0.4173 | 8.0 | 80 | 1.0136 | 0.6 |
| 0.1846 | 9.0 | 90 | 1.0683 | 0.6 |
| 0.1125 | 10.0 | 100 | 1.2376 | 0.6 |
| 0.0754 | 11.0 | 110 | 1.2537 | 0.6 |
| 0.0401 | 12.0 | 120 | 1.4387 | 0.6 |
| 0.0285 | 13.0 | 130 | 1.5702 | 0.6 |
| 0.0241 | 14.0 | 140 | 1.6795 | 0.6 |
| 0.0175 | 15.0 | 150 | 1.7228 | 0.6 |
| 0.0147 | 16.0 | 160 | 1.7892 | 0.6 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-16-2", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-2 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 1.0675
- Accuracy: 0.44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0951 | 1.0 | 10 | 1.1346 | 0.1 |
| 1.0424 | 2.0 | 20 | 1.1120 | 0.2 |
| 0.957 | 3.0 | 30 | 1.1002 | 0.3 |
| 0.7889 | 4.0 | 40 | 1.0838 | 0.4 |
| 0.6162 | 5.0 | 50 | 1.0935 | 0.5 |
| 0.4849 | 6.0 | 60 | 1.0867 | 0.5 |
| 0.3089 | 7.0 | 70 | 1.1145 | 0.5 |
| 0.2145 | 8.0 | 80 | 1.1278 | 0.6 |
| 0.0805 | 9.0 | 90 | 1.2801 | 0.6 |
| 0.0497 | 10.0 | 100 | 1.3296 | 0.6 |
| 0.0328 | 11.0 | 110 | 1.2913 | 0.6 |
| 0.0229 | 12.0 | 120 | 1.3692 | 0.6 |
| 0.0186 | 13.0 | 130 | 1.4642 | 0.6 |
| 0.0161 | 14.0 | 140 | 1.5568 | 0.6 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-16-3", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-3 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 1.0903
- Accuracy: 0.4805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0974 | 1.0 | 10 | 1.1139 | 0.1 |
| 1.0637 | 2.0 | 20 | 1.0988 | 0.1 |
| 0.9758 | 3.0 | 30 | 1.1013 | 0.1 |
| 0.9012 | 4.0 | 40 | 1.0769 | 0.3 |
| 0.6993 | 5.0 | 50 | 1.0484 | 0.6 |
| 0.5676 | 6.0 | 60 | 1.0223 | 0.6 |
| 0.4069 | 7.0 | 70 | 0.9190 | 0.6 |
| 0.3192 | 8.0 | 80 | 1.1370 | 0.6 |
| 0.1112 | 9.0 | 90 | 1.1728 | 0.6 |
| 0.07 | 10.0 | 100 | 1.1998 | 0.6 |
| 0.0397 | 11.0 | 110 | 1.3700 | 0.6 |
| 0.027 | 12.0 | 120 | 1.3329 | 0.6 |
| 0.021 | 13.0 | 130 | 1.2697 | 0.6 |
| 0.0177 | 14.0 | 140 | 1.4195 | 0.6 |
| 0.0142 | 15.0 | 150 | 1.5342 | 0.6 |
| 0.0118 | 16.0 | 160 | 1.5999 | 0.6 |
| 0.0108 | 17.0 | 170 | 1.6327 | 0.6 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-16-4", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-4 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 0.9907
- Accuracy: 0.49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0941 | 1.0 | 10 | 1.1287 | 0.2 |
| 1.0481 | 2.0 | 20 | 1.1136 | 0.2 |
| 0.9498 | 3.0 | 30 | 1.1200 | 0.2 |
| 0.8157 | 4.0 | 40 | 1.0771 | 0.2 |
| 0.65 | 5.0 | 50 | 0.9733 | 0.4 |
| 0.5021 | 6.0 | 60 | 1.0626 | 0.4 |
| 0.3358 | 7.0 | 70 | 1.0787 | 0.4 |
| 0.2017 | 8.0 | 80 | 1.3183 | 0.4 |
| 0.088 | 9.0 | 90 | 1.2204 | 0.5 |
| 0.0527 | 10.0 | 100 | 1.6892 | 0.4 |
| 0.0337 | 11.0 | 110 | 1.6967 | 0.5 |
| 0.0238 | 12.0 | 120 | 1.5436 | 0.5 |
| 0.0183 | 13.0 | 130 | 1.7447 | 0.4 |
| 0.0159 | 14.0 | 140 | 1.8999 | 0.4 |
| 0.014 | 15.0 | 150 | 1.9004 | 0.4 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-16-5", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-5 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 0.8331
- Accuracy: 0.625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0881 | 1.0 | 10 | 1.1248 | 0.1 |
| 1.0586 | 2.0 | 20 | 1.1162 | 0.2 |
| 0.9834 | 3.0 | 30 | 1.1199 | 0.3 |
| 0.9271 | 4.0 | 40 | 1.0740 | 0.3 |
| 0.7663 | 5.0 | 50 | 1.0183 | 0.5 |
| 0.6042 | 6.0 | 60 | 1.0259 | 0.5 |
| 0.4482 | 7.0 | 70 | 0.8699 | 0.7 |
| 0.3072 | 8.0 | 80 | 1.0615 | 0.5 |
| 0.1458 | 9.0 | 90 | 1.0164 | 0.5 |
| 0.0838 | 10.0 | 100 | 1.0620 | 0.5 |
| 0.055 | 11.0 | 110 | 1.1829 | 0.5 |
| 0.0347 | 12.0 | 120 | 1.2815 | 0.4 |
| 0.0244 | 13.0 | 130 | 1.2607 | 0.6 |
| 0.0213 | 14.0 | 140 | 1.3695 | 0.5 |
| 0.0169 | 15.0 | 150 | 1.4397 | 0.5 |
| 0.0141 | 16.0 | 160 | 1.4388 | 0.6 |
| 0.0122 | 17.0 | 170 | 1.4242 | 0.6 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-16-6", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-6 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 0.9011
- Accuracy: 0.578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0968 | 1.0 | 10 | 1.1309 | 0.0 |
| 1.0709 | 2.0 | 20 | 1.1237 | 0.1 |
| 0.9929 | 3.0 | 30 | 1.1254 | 0.1 |
| 0.878 | 4.0 | 40 | 1.1206 | 0.5 |
| 0.7409 | 5.0 | 50 | 1.0831 | 0.1 |
| 0.5663 | 6.0 | 60 | 0.9830 | 0.6 |
| 0.4105 | 7.0 | 70 | 0.9919 | 0.5 |
| 0.2912 | 8.0 | 80 | 1.0472 | 0.6 |
| 0.1013 | 9.0 | 90 | 1.1617 | 0.4 |
| 0.0611 | 10.0 | 100 | 1.2789 | 0.6 |
| 0.039 | 11.0 | 110 | 1.4091 | 0.4 |
| 0.0272 | 12.0 | 120 | 1.4974 | 0.4 |
| 0.0189 | 13.0 | 130 | 1.4845 | 0.5 |
| 0.018 | 14.0 | 140 | 1.4924 | 0.5 |
| 0.0131 | 15.0 | 150 | 1.5206 | 0.6 |
| 0.0116 | 16.0 | 160 | 1.5858 | 0.5 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-16-7", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-7 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 1.0704
- Accuracy: 0.394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1031 | 1.0 | 10 | 1.1286 | 0.1 |
| 1.0648 | 2.0 | 20 | 1.1157 | 0.3 |
| 0.9982 | 3.0 | 30 | 1.1412 | 0.2 |
| 0.9283 | 4.0 | 40 | 1.2053 | 0.2 |
| 0.7958 | 5.0 | 50 | 1.1466 | 0.2 |
| 0.6668 | 6.0 | 60 | 1.1783 | 0.3 |
| 0.5068 | 7.0 | 70 | 1.2992 | 0.3 |
| 0.3741 | 8.0 | 80 | 1.3483 | 0.3 |
| 0.1653 | 9.0 | 90 | 1.4533 | 0.2 |
| 0.0946 | 10.0 | 100 | 1.6292 | 0.2 |
| 0.0569 | 11.0 | 110 | 1.8381 | 0.2 |
| 0.0346 | 12.0 | 120 | 2.0781 | 0.2 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-16-8", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-8 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 1.1121
- Accuracy: 0.16
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1038 | 1.0 | 10 | 1.1243 | 0.1 |
| 1.0859 | 2.0 | 20 | 1.1182 | 0.2 |
| 1.0234 | 3.0 | 30 | 1.1442 | 0.3 |
| 0.9493 | 4.0 | 40 | 1.2239 | 0.1 |
| 0.8114 | 5.0 | 50 | 1.2023 | 0.4 |
| 0.6464 | 6.0 | 60 | 1.2329 | 0.4 |
| 0.4731 | 7.0 | 70 | 1.2971 | 0.5 |
| 0.3355 | 8.0 | 80 | 1.3913 | 0.4 |
| 0.1268 | 9.0 | 90 | 1.4670 | 0.5 |
| 0.0747 | 10.0 | 100 | 1.7961 | 0.4 |
| 0.0449 | 11.0 | 110 | 1.8168 | 0.5 |
| 0.0307 | 12.0 | 120 | 1.9307 | 0.4 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-16-9", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-9 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 0.7714
- Accuracy: 0.705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0871 | 1.0 | 19 | 1.0704 | 0.45 |
| 1.0019 | 2.0 | 38 | 1.0167 | 0.55 |
| 0.8412 | 3.0 | 57 | 0.9134 | 0.55 |
| 0.6047 | 4.0 | 76 | 0.8430 | 0.6 |
| 0.3746 | 5.0 | 95 | 0.8315 | 0.6 |
| 0.1885 | 6.0 | 114 | 0.8585 | 0.6 |
| 0.0772 | 7.0 | 133 | 0.9443 | 0.65 |
| 0.0312 | 8.0 | 152 | 1.1019 | 0.65 |
| 0.0161 | 9.0 | 171 | 1.1420 | 0.65 |
| 0.0102 | 10.0 | 190 | 1.2773 | 0.65 |
| 0.0077 | 11.0 | 209 | 1.2454 | 0.65 |
| 0.0064 | 12.0 | 228 | 1.2785 | 0.65 |
| 0.006 | 13.0 | 247 | 1.3834 | 0.65 |
| 0.0045 | 14.0 | 266 | 1.4139 | 0.65 |
| 0.0043 | 15.0 | 285 | 1.4056 | 0.65 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-32-0", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-0 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 1.0606
- Accuracy: 0.4745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0941 | 1.0 | 19 | 1.1045 | 0.2 |
| 0.9967 | 2.0 | 38 | 1.1164 | 0.35 |
| 0.8164 | 3.0 | 57 | 1.1570 | 0.4 |
| 0.5884 | 4.0 | 76 | 1.2403 | 0.35 |
| 0.3322 | 5.0 | 95 | 1.3815 | 0.35 |
| 0.156 | 6.0 | 114 | 1.8102 | 0.3 |
| 0.0576 | 7.0 | 133 | 2.1439 | 0.4 |
| 0.0227 | 8.0 | 152 | 2.4368 | 0.3 |
| 0.0133 | 9.0 | 171 | 2.5994 | 0.4 |
| 0.009 | 10.0 | 190 | 2.7388 | 0.35 |
| 0.0072 | 11.0 | 209 | 2.8287 | 0.35 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-32-1", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-1 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 0.7136
- Accuracy: 0.679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1052 | 1.0 | 19 | 1.0726 | 0.45 |
| 1.0421 | 2.0 | 38 | 1.0225 | 0.5 |
| 0.9173 | 3.0 | 57 | 0.9164 | 0.6 |
| 0.6822 | 4.0 | 76 | 0.8251 | 0.7 |
| 0.4407 | 5.0 | 95 | 0.8908 | 0.5 |
| 0.2367 | 6.0 | 114 | 0.6772 | 0.75 |
| 0.1145 | 7.0 | 133 | 0.7792 | 0.65 |
| 0.0479 | 8.0 | 152 | 1.0657 | 0.6 |
| 0.0186 | 9.0 | 171 | 1.2228 | 0.65 |
| 0.0111 | 10.0 | 190 | 1.1100 | 0.6 |
| 0.0083 | 11.0 | 209 | 1.1991 | 0.65 |
| 0.0067 | 12.0 | 228 | 1.2654 | 0.65 |
| 0.0061 | 13.0 | 247 | 1.2837 | 0.65 |
| 0.0046 | 14.0 | 266 | 1.2860 | 0.6 |
| 0.0043 | 15.0 | 285 | 1.3160 | 0.65 |
| 0.0037 | 16.0 | 304 | 1.3323 | 0.65 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-32-2", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-2 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 0.8286
- Accuracy: 0.661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1041 | 1.0 | 19 | 1.0658 | 0.5 |
| 1.009 | 2.0 | 38 | 0.9892 | 0.7 |
| 0.7925 | 3.0 | 57 | 0.8516 | 0.7 |
| 0.5279 | 4.0 | 76 | 0.7877 | 0.65 |
| 0.2932 | 5.0 | 95 | 0.7592 | 0.65 |
| 0.1166 | 6.0 | 114 | 0.9437 | 0.65 |
| 0.044 | 7.0 | 133 | 1.0315 | 0.75 |
| 0.0197 | 8.0 | 152 | 1.3513 | 0.55 |
| 0.0126 | 9.0 | 171 | 1.1702 | 0.7 |
| 0.0083 | 10.0 | 190 | 1.2272 | 0.7 |
| 0.0068 | 11.0 | 209 | 1.2889 | 0.7 |
| 0.0059 | 12.0 | 228 | 1.3073 | 0.7 |
| 0.0052 | 13.0 | 247 | 1.3595 | 0.7 |
| 0.0041 | 14.0 | 266 | 1.4443 | 0.7 |
| 0.0038 | 15.0 | 285 | 1.4709 | 0.7 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-32-3", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-3 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 0.7384
- Accuracy: 0.724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1013 | 1.0 | 19 | 1.0733 | 0.55 |
| 1.0226 | 2.0 | 38 | 1.0064 | 0.65 |
| 0.8539 | 3.0 | 57 | 0.8758 | 0.75 |
| 0.584 | 4.0 | 76 | 0.6941 | 0.7 |
| 0.2813 | 5.0 | 95 | 0.5151 | 0.7 |
| 0.1122 | 6.0 | 114 | 0.4351 | 0.8 |
| 0.0432 | 7.0 | 133 | 0.4896 | 0.85 |
| 0.0199 | 8.0 | 152 | 0.5391 | 0.85 |
| 0.0126 | 9.0 | 171 | 0.5200 | 0.85 |
| 0.0085 | 10.0 | 190 | 0.5622 | 0.85 |
| 0.0069 | 11.0 | 209 | 0.5950 | 0.85 |
| 0.0058 | 12.0 | 228 | 0.6015 | 0.85 |
| 0.0053 | 13.0 | 247 | 0.6120 | 0.85 |
| 0.0042 | 14.0 | 266 | 0.6347 | 0.85 |
| 0.0039 | 15.0 | 285 | 0.6453 | 0.85 |
| 0.0034 | 16.0 | 304 | 0.6660 | 0.85 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-32-4", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-4 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 1.1327
- Accuracy: 0.57
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0972 | 1.0 | 19 | 1.0470 | 0.45 |
| 0.9738 | 2.0 | 38 | 0.9244 | 0.65 |
| 0.7722 | 3.0 | 57 | 0.8612 | 0.65 |
| 0.4929 | 4.0 | 76 | 0.6759 | 0.75 |
| 0.2435 | 5.0 | 95 | 0.7273 | 0.7 |
| 0.0929 | 6.0 | 114 | 0.6444 | 0.85 |
| 0.0357 | 7.0 | 133 | 0.7671 | 0.8 |
| 0.0173 | 8.0 | 152 | 0.7599 | 0.75 |
| 0.0121 | 9.0 | 171 | 0.8140 | 0.8 |
| 0.0081 | 10.0 | 190 | 0.7861 | 0.8 |
| 0.0066 | 11.0 | 209 | 0.8318 | 0.8 |
| 0.0057 | 12.0 | 228 | 0.8777 | 0.8 |
| 0.0053 | 13.0 | 247 | 0.8501 | 0.8 |
| 0.004 | 14.0 | 266 | 0.8603 | 0.8 |
| 0.004 | 15.0 | 285 | 0.8787 | 0.8 |
| 0.0034 | 16.0 | 304 | 0.8969 | 0.8 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-32-5", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-5 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 1.0523
- Accuracy: 0.663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0957 | 1.0 | 19 | 1.0696 | 0.6 |
| 1.0107 | 2.0 | 38 | 1.0047 | 0.55 |
| 0.8257 | 3.0 | 57 | 0.8358 | 0.8 |
| 0.6006 | 4.0 | 76 | 0.7641 | 0.6 |
| 0.4172 | 5.0 | 95 | 0.5931 | 0.8 |
| 0.2639 | 6.0 | 114 | 0.5570 | 0.7 |
| 0.1314 | 7.0 | 133 | 0.5017 | 0.65 |
| 0.0503 | 8.0 | 152 | 0.3115 | 0.75 |
| 0.023 | 9.0 | 171 | 0.4353 | 0.85 |
| 0.0128 | 10.0 | 190 | 0.5461 | 0.75 |
| 0.0092 | 11.0 | 209 | 0.5045 | 0.8 |
| 0.007 | 12.0 | 228 | 0.5014 | 0.8 |
| 0.0064 | 13.0 | 247 | 0.5070 | 0.8 |
| 0.0049 | 14.0 | 266 | 0.4681 | 0.8 |
| 0.0044 | 15.0 | 285 | 0.4701 | 0.8 |
| 0.0039 | 16.0 | 304 | 0.4862 | 0.8 |
| 0.0036 | 17.0 | 323 | 0.4742 | 0.8 |
| 0.0035 | 18.0 | 342 | 0.4652 | 0.8 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-32-6", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-6 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (hate_speech_offensive, judging by the model name).
It achieves the following results on the evaluation set:
- Loss: 0.8210
- Accuracy: 0.6305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0989 | 1.0 | 19 | 1.0655 | 0.4 |
| 1.0102 | 2.0 | 38 | 0.9927 | 0.6 |
| 0.8063 | 3.0 | 57 | 0.9117 | 0.5 |
| 0.5284 | 4.0 | 76 | 0.8058 | 0.55 |
| 0.2447 | 5.0 | 95 | 0.8393 | 0.45 |
| 0.098 | 6.0 | 114 | 0.8438 | 0.6 |
| 0.0388 | 7.0 | 133 | 1.1901 | 0.45 |
| 0.0188 | 8.0 | 152 | 1.4429 | 0.45 |
| 0.0121 | 9.0 | 171 | 1.3648 | 0.4 |
| 0.0082 | 10.0 | 190 | 1.4768 | 0.4 |
| 0.0066 | 11.0 | 209 | 1.4830 | 0.45 |
| 0.0057 | 12.0 | 228 | 1.4936 | 0.45 |
| 0.0053 | 13.0 | 247 | 1.5649 | 0.4 |
| 0.0041 | 14.0 | 266 | 1.6306 | 0.4 |
### Framework versions
- Transformers 4.15.0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-32-7", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-7 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9191
- Accuracy: 0.632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1008 | 1.0 | 19 | 1.0877 | 0.4 |
| 1.0354 | 2.0 | 38 | 1.0593 | 0.35 |
| 0.8765 | 3.0 | 57 | 0.9722 | 0.5 |
| 0.6365 | 4.0 | 76 | 0.9271 | 0.55 |
| 0.3944 | 5.0 | 95 | 0.7852 | 0.5 |
| 0.2219 | 6.0 | 114 | 0.9360 | 0.55 |
| 0.126 | 7.0 | 133 | 1.0610 | 0.55 |
| 0.0389 | 8.0 | 152 | 1.0884 | 0.6 |
| 0.0191 | 9.0 | 171 | 1.3483 | 0.55 |
| 0.0108 | 10.0 | 190 | 1.4226 | 0.55 |
| 0.0082 | 11.0 | 209 | 1.4270 | 0.55 |
| 0.0065 | 12.0 | 228 | 1.5074 | 0.55 |
| 0.0059 | 13.0 | 247 | 1.5577 | 0.55 |
| 0.0044 | 14.0 | 266 | 1.5798 | 0.55 |
| 0.0042 | 15.0 | 285 | 1.6196 | 0.55 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-32-8", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-8 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7075
- Accuracy: 0.692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1054 | 1.0 | 19 | 1.0938 | 0.35 |
| 1.0338 | 2.0 | 38 | 1.0563 | 0.65 |
| 0.8622 | 3.0 | 57 | 0.9372 | 0.6 |
| 0.5919 | 4.0 | 76 | 0.8461 | 0.6 |
| 0.3357 | 5.0 | 95 | 1.0206 | 0.45 |
| 0.1621 | 6.0 | 114 | 0.9802 | 0.7 |
| 0.0637 | 7.0 | 133 | 1.2434 | 0.65 |
| 0.0261 | 8.0 | 152 | 1.3865 | 0.65 |
| 0.0156 | 9.0 | 171 | 1.4414 | 0.7 |
| 0.01 | 10.0 | 190 | 1.5502 | 0.7 |
| 0.0079 | 11.0 | 209 | 1.6102 | 0.7 |
| 0.0062 | 12.0 | 228 | 1.6525 | 0.7 |
| 0.0058 | 13.0 | 247 | 1.6884 | 0.7 |
| 0.0046 | 14.0 | 266 | 1.7479 | 0.7 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-32-9", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-9 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1097
- Accuracy: 0.132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1065 | 1.0 | 5 | 1.1287 | 0.0 |
| 1.0592 | 2.0 | 10 | 1.1729 | 0.0 |
| 1.0059 | 3.0 | 15 | 1.1959 | 0.0 |
| 0.9129 | 4.0 | 20 | 1.2410 | 0.0 |
| 0.8231 | 5.0 | 25 | 1.2820 | 0.0 |
| 0.7192 | 6.0 | 30 | 1.3361 | 0.0 |
| 0.6121 | 7.0 | 35 | 1.4176 | 0.0 |
| 0.5055 | 8.0 | 40 | 1.5111 | 0.0 |
| 0.4002 | 9.0 | 45 | 1.5572 | 0.0 |
| 0.3788 | 10.0 | 50 | 1.6733 | 0.0 |
| 0.2755 | 11.0 | 55 | 1.7381 | 0.2 |
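Although `num_epochs` was set to 50, the log above stops at epoch 11 with the best validation loss at epoch 1, which is consistent with early stopping (patience around 10). That is an assumption, not stated in the card; a sketch of how such a setup looks in Transformers:

```python
from transformers import EarlyStoppingCallback, TrainingArguments

# Hypothetical reconstruction: patience=10 would explain halting at epoch 11
# after the best validation loss at epoch 1.
args = TrainingArguments(
    output_dir="out",
    num_train_epochs=50,
    evaluation_strategy="epoch",   # one validation pass per epoch, as in the table
    save_strategy="epoch",
    load_best_model_at_end=True,   # required by EarlyStoppingCallback
    metric_for_best_model="loss",
    greater_is_better=False,
)
early_stop = EarlyStoppingCallback(early_stopping_patience=10)
# Trainer(model=..., args=args, callbacks=[early_stop], ...) would then stop early.
```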
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-8-0", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-0 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1013
- Accuracy: 0.0915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0866 | 1.0 | 5 | 1.1363 | 0.0 |
| 1.0439 | 2.0 | 10 | 1.1803 | 0.0 |
| 1.0227 | 3.0 | 15 | 1.2162 | 0.2 |
| 0.9111 | 4.0 | 20 | 1.2619 | 0.0 |
| 0.8243 | 5.0 | 25 | 1.2929 | 0.2 |
| 0.7488 | 6.0 | 30 | 1.3010 | 0.2 |
| 0.62 | 7.0 | 35 | 1.3011 | 0.2 |
| 0.5054 | 8.0 | 40 | 1.2931 | 0.4 |
| 0.4191 | 9.0 | 45 | 1.3274 | 0.4 |
| 0.4107 | 10.0 | 50 | 1.3259 | 0.4 |
| 0.3376 | 11.0 | 55 | 1.2800 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-8-1", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-1 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1019
- Accuracy: 0.139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1082 | 1.0 | 5 | 1.1432 | 0.0 |
| 1.0524 | 2.0 | 10 | 1.1613 | 0.0 |
| 1.0641 | 3.0 | 15 | 1.1547 | 0.0 |
| 0.9592 | 4.0 | 20 | 1.1680 | 0.0 |
| 0.9085 | 5.0 | 25 | 1.1762 | 0.0 |
| 0.8508 | 6.0 | 30 | 1.1809 | 0.2 |
| 0.7263 | 7.0 | 35 | 1.1912 | 0.2 |
| 0.6448 | 8.0 | 40 | 1.2100 | 0.2 |
| 0.5378 | 9.0 | 45 | 1.2037 | 0.2 |
| 0.5031 | 10.0 | 50 | 1.2096 | 0.2 |
| 0.4041 | 11.0 | 55 | 1.2203 | 0.2 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-8-2", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-2 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9681
- Accuracy: 0.549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1073 | 1.0 | 5 | 1.1393 | 0.0 |
| 1.0392 | 2.0 | 10 | 1.1729 | 0.0 |
| 1.0302 | 3.0 | 15 | 1.1694 | 0.2 |
| 0.9176 | 4.0 | 20 | 1.1846 | 0.2 |
| 0.8339 | 5.0 | 25 | 1.1663 | 0.2 |
| 0.7533 | 6.0 | 30 | 1.1513 | 0.4 |
| 0.6327 | 7.0 | 35 | 1.1474 | 0.4 |
| 0.4402 | 8.0 | 40 | 1.1385 | 0.4 |
| 0.3752 | 9.0 | 45 | 1.0965 | 0.2 |
| 0.3448 | 10.0 | 50 | 1.0357 | 0.2 |
| 0.2582 | 11.0 | 55 | 1.0438 | 0.2 |
| 0.1903 | 12.0 | 60 | 1.0561 | 0.2 |
| 0.1479 | 13.0 | 65 | 1.0569 | 0.2 |
| 0.1129 | 14.0 | 70 | 1.0455 | 0.2 |
| 0.1071 | 15.0 | 75 | 1.0416 | 0.4 |
| 0.0672 | 16.0 | 80 | 1.1164 | 0.4 |
| 0.0561 | 17.0 | 85 | 1.1846 | 0.6 |
| 0.0463 | 18.0 | 90 | 1.2040 | 0.6 |
| 0.0431 | 19.0 | 95 | 1.2078 | 0.6 |
| 0.0314 | 20.0 | 100 | 1.2368 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-8-3", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-3 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1045
- Accuracy: 0.128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1115 | 1.0 | 5 | 1.1174 | 0.0 |
| 1.0518 | 2.0 | 10 | 1.1379 | 0.0 |
| 1.0445 | 3.0 | 15 | 1.1287 | 0.0 |
| 0.9306 | 4.0 | 20 | 1.1324 | 0.2 |
| 0.8242 | 5.0 | 25 | 1.1219 | 0.2 |
| 0.7986 | 6.0 | 30 | 1.1369 | 0.4 |
| 0.7369 | 7.0 | 35 | 1.1732 | 0.2 |
| 0.534 | 8.0 | 40 | 1.1828 | 0.6 |
| 0.4285 | 9.0 | 45 | 1.1482 | 0.6 |
| 0.3691 | 10.0 | 50 | 1.1401 | 0.6 |
| 0.3215 | 11.0 | 55 | 1.1286 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-8-4", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-4 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7214
- Accuracy: 0.37
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the linear learning-rate schedule is illustrated after this list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
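For illustration only, the "linear" schedule named above decays the learning rate from 2e-05 toward zero over training; a self-contained sketch (the stand-in module and step count are hypothetical, with 130 steps matching the 26 epochs x 5 steps per epoch in the log below):

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(4, 3)  # stand-in module, only to build an optimizer
optimizer = torch.optim.AdamW(
    model.parameters(), lr=2e-5, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=130
)
for _ in range(3):
    optimizer.step()
    scheduler.step()
    print(scheduler.get_last_lr())  # shrinks linearly toward 0
```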
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0995 | 1.0 | 5 | 1.1301 | 0.0 |
| 1.0227 | 2.0 | 10 | 1.1727 | 0.0 |
| 1.0337 | 3.0 | 15 | 1.1734 | 0.2 |
| 0.9137 | 4.0 | 20 | 1.1829 | 0.2 |
| 0.8065 | 5.0 | 25 | 1.1496 | 0.4 |
| 0.7038 | 6.0 | 30 | 1.1101 | 0.4 |
| 0.6246 | 7.0 | 35 | 1.0982 | 0.2 |
| 0.4481 | 8.0 | 40 | 1.0913 | 0.2 |
| 0.3696 | 9.0 | 45 | 1.0585 | 0.4 |
| 0.3137 | 10.0 | 50 | 1.0418 | 0.4 |
| 0.2482 | 11.0 | 55 | 1.0078 | 0.4 |
| 0.196 | 12.0 | 60 | 0.9887 | 0.6 |
| 0.1344 | 13.0 | 65 | 0.9719 | 0.6 |
| 0.1014 | 14.0 | 70 | 1.0053 | 0.6 |
| 0.111 | 15.0 | 75 | 0.9653 | 0.6 |
| 0.0643 | 16.0 | 80 | 0.9018 | 0.6 |
| 0.0559 | 17.0 | 85 | 0.9393 | 0.6 |
| 0.0412 | 18.0 | 90 | 1.0210 | 0.6 |
| 0.0465 | 19.0 | 95 | 0.9965 | 0.6 |
| 0.0328 | 20.0 | 100 | 0.9739 | 0.6 |
| 0.0289 | 21.0 | 105 | 0.9796 | 0.6 |
| 0.0271 | 22.0 | 110 | 0.9968 | 0.6 |
| 0.0239 | 23.0 | 115 | 1.0143 | 0.6 |
| 0.0201 | 24.0 | 120 | 1.0459 | 0.6 |
| 0.0185 | 25.0 | 125 | 1.0698 | 0.6 |
| 0.0183 | 26.0 | 130 | 1.0970 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-8-5", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-5 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1275
- Accuracy: 0.3795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.11 | 1.0 | 5 | 1.1184 | 0.0 |
| 1.0608 | 2.0 | 10 | 1.1227 | 0.0 |
| 1.0484 | 3.0 | 15 | 1.1009 | 0.2 |
| 0.9614 | 4.0 | 20 | 1.1009 | 0.2 |
| 0.8545 | 5.0 | 25 | 1.0772 | 0.2 |
| 0.8241 | 6.0 | 30 | 1.0457 | 0.2 |
| 0.708 | 7.0 | 35 | 1.0301 | 0.4 |
| 0.5045 | 8.0 | 40 | 1.0325 | 0.4 |
| 0.4175 | 9.0 | 45 | 1.0051 | 0.4 |
| 0.3446 | 10.0 | 50 | 0.9610 | 0.4 |
| 0.2851 | 11.0 | 55 | 0.9954 | 0.4 |
| 0.1808 | 12.0 | 60 | 1.0561 | 0.4 |
| 0.1435 | 13.0 | 65 | 1.0218 | 0.4 |
| 0.1019 | 14.0 | 70 | 1.0254 | 0.4 |
| 0.0908 | 15.0 | 75 | 0.9935 | 0.4 |
| 0.0591 | 16.0 | 80 | 1.0090 | 0.4 |
| 0.0512 | 17.0 | 85 | 1.0884 | 0.4 |
| 0.0397 | 18.0 | 90 | 1.2732 | 0.4 |
| 0.039 | 19.0 | 95 | 1.2979 | 0.6 |
| 0.0325 | 20.0 | 100 | 1.2705 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-8-6", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-6 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1206
- Accuracy: 0.0555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1186 | 1.0 | 5 | 1.1631 | 0.0 |
| 1.058 | 2.0 | 10 | 1.1986 | 0.0 |
| 1.081 | 3.0 | 15 | 1.2111 | 0.0 |
| 1.0118 | 4.0 | 20 | 1.2373 | 0.0 |
| 0.9404 | 5.0 | 25 | 1.2645 | 0.0 |
| 0.9146 | 6.0 | 30 | 1.3258 | 0.0 |
| 0.8285 | 7.0 | 35 | 1.3789 | 0.0 |
| 0.6422 | 8.0 | 40 | 1.3783 | 0.0 |
| 0.6156 | 9.0 | 45 | 1.3691 | 0.0 |
| 0.5321 | 10.0 | 50 | 1.3693 | 0.0 |
| 0.4504 | 11.0 | 55 | 1.4000 | 0.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-8-7", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-7 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0005
- Accuracy: 0.518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1029 | 1.0 | 5 | 1.1295 | 0.0 |
| 1.0472 | 2.0 | 10 | 1.1531 | 0.0 |
| 1.054 | 3.0 | 15 | 1.1475 | 0.0 |
| 0.9366 | 4.0 | 20 | 1.1515 | 0.0 |
| 0.8698 | 5.0 | 25 | 1.1236 | 0.4 |
| 0.8148 | 6.0 | 30 | 1.0716 | 0.6 |
| 0.6884 | 7.0 | 35 | 1.0662 | 0.6 |
| 0.5641 | 8.0 | 40 | 1.0671 | 0.6 |
| 0.5 | 9.0 | 45 | 1.0282 | 0.6 |
| 0.3882 | 10.0 | 50 | 1.0500 | 0.6 |
| 0.3522 | 11.0 | 55 | 1.1381 | 0.6 |
| 0.2492 | 12.0 | 60 | 1.1278 | 0.6 |
| 0.2063 | 13.0 | 65 | 1.0731 | 0.6 |
| 0.1608 | 14.0 | 70 | 1.1339 | 0.6 |
| 0.1448 | 15.0 | 75 | 1.1892 | 0.6 |
| 0.0925 | 16.0 | 80 | 1.1840 | 0.6 |
| 0.0768 | 17.0 | 85 | 1.0608 | 0.6 |
| 0.0585 | 18.0 | 90 | 1.1073 | 0.6 |
| 0.0592 | 19.0 | 95 | 1.3134 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-8-8", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-8 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0959
- Accuracy: 0.093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1068 | 1.0 | 5 | 1.1545 | 0.0 |
| 1.0494 | 2.0 | 10 | 1.1971 | 0.0 |
| 1.0612 | 3.0 | 15 | 1.2164 | 0.0 |
| 0.9517 | 4.0 | 20 | 1.2545 | 0.0 |
| 0.8874 | 5.0 | 25 | 1.2699 | 0.0 |
| 0.8598 | 6.0 | 30 | 1.2835 | 0.0 |
| 0.7006 | 7.0 | 35 | 1.3139 | 0.0 |
| 0.5969 | 8.0 | 40 | 1.3116 | 0.2 |
| 0.4769 | 9.0 | 45 | 1.3124 | 0.4 |
| 0.4352 | 10.0 | 50 | 1.3541 | 0.4 |
| 0.3231 | 11.0 | 55 | 1.3919 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__hate_speech_offensive__train-8-9", "results": []}]} | SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-9 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__all-train
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2496
- Accuracy: 0.8962
## Model description
More information needed
## Intended uses & limitations
More information needed
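While the card leaves intended uses unspecified, a minimal usage sketch with the standard `text-classification` pipeline follows; the repository id is taken from this card's metadata, and the example sentence is arbitrary:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="SetFit/distilbert-base-uncased__sst2__all-train",
)
print(classifier("a gripping, beautifully shot film"))
# -> [{'label': ..., 'score': ...}]; label names depend on the model config
```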
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3643 | 1.0 | 433 | 0.2496 | 0.8962 |
| 0.196 | 2.0 | 866 | 0.2548 | 0.9110 |
| 0.0915 | 3.0 | 1299 | 0.4483 | 0.8957 |
| 0.0505 | 4.0 | 1732 | 0.4968 | 0.9044 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased__sst2__all-train", "results": []}]} | SetFit/distilbert-base-uncased__sst2__all-train | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6903
- Accuracy: 0.5091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6934 | 1.0 | 7 | 0.7142 | 0.2857 |
| 0.6703 | 2.0 | 14 | 0.7379 | 0.2857 |
| 0.6282 | 3.0 | 21 | 0.7769 | 0.2857 |
| 0.5193 | 4.0 | 28 | 0.8799 | 0.2857 |
| 0.5104 | 5.0 | 35 | 0.8380 | 0.4286 |
| 0.2504 | 6.0 | 42 | 0.8622 | 0.4286 |
| 0.1794 | 7.0 | 49 | 0.9227 | 0.4286 |
| 0.1156 | 8.0 | 56 | 0.8479 | 0.4286 |
| 0.0709 | 9.0 | 63 | 1.0929 | 0.2857 |
| 0.0471 | 10.0 | 70 | 1.2189 | 0.2857 |
| 0.0288 | 11.0 | 77 | 1.2026 | 0.4286 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-16-0", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-16-0 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6012
- Accuracy: 0.6766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6983 | 1.0 | 7 | 0.7036 | 0.2857 |
| 0.6836 | 2.0 | 14 | 0.7181 | 0.2857 |
| 0.645 | 3.0 | 21 | 0.7381 | 0.2857 |
| 0.5902 | 4.0 | 28 | 0.7746 | 0.2857 |
| 0.5799 | 5.0 | 35 | 0.7242 | 0.5714 |
| 0.3584 | 6.0 | 42 | 0.6935 | 0.5714 |
| 0.2596 | 7.0 | 49 | 0.7041 | 0.5714 |
| 0.1815 | 8.0 | 56 | 0.5930 | 0.7143 |
| 0.0827 | 9.0 | 63 | 0.6976 | 0.7143 |
| 0.0613 | 10.0 | 70 | 0.7346 | 0.7143 |
| 0.0356 | 11.0 | 77 | 0.6992 | 0.5714 |
| 0.0158 | 12.0 | 84 | 0.7328 | 0.5714 |
| 0.013 | 13.0 | 91 | 0.7819 | 0.5714 |
| 0.0103 | 14.0 | 98 | 0.8589 | 0.5714 |
| 0.0087 | 15.0 | 105 | 0.9177 | 0.5714 |
| 0.0076 | 16.0 | 112 | 0.9519 | 0.5714 |
| 0.0078 | 17.0 | 119 | 0.9556 | 0.5714 |
| 0.006 | 18.0 | 126 | 0.9542 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-16-1", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-16-1 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6748
- Accuracy: 0.6315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7043 | 1.0 | 7 | 0.7054 | 0.2857 |
| 0.6711 | 2.0 | 14 | 0.7208 | 0.2857 |
| 0.6311 | 3.0 | 21 | 0.7365 | 0.2857 |
| 0.551 | 4.0 | 28 | 0.7657 | 0.5714 |
| 0.5599 | 5.0 | 35 | 0.6915 | 0.5714 |
| 0.3167 | 6.0 | 42 | 0.7134 | 0.5714 |
| 0.2489 | 7.0 | 49 | 0.7892 | 0.5714 |
| 0.1985 | 8.0 | 56 | 0.6756 | 0.7143 |
| 0.0864 | 9.0 | 63 | 0.8059 | 0.5714 |
| 0.0903 | 10.0 | 70 | 0.8165 | 0.7143 |
| 0.0429 | 11.0 | 77 | 0.7947 | 0.7143 |
| 0.0186 | 12.0 | 84 | 0.8570 | 0.7143 |
| 0.0146 | 13.0 | 91 | 0.9346 | 0.7143 |
| 0.011 | 14.0 | 98 | 0.9804 | 0.7143 |
| 0.0098 | 15.0 | 105 | 1.0136 | 0.7143 |
| 0.0086 | 16.0 | 112 | 1.0424 | 0.7143 |
| 0.0089 | 17.0 | 119 | 1.0736 | 0.7143 |
| 0.0068 | 18.0 | 126 | 1.0808 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-16-2", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-16-2 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7887
- Accuracy: 0.6458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
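The `train-16-3` suffix in the model id suggests a 16-example training split (variant 3), though the card itself does not confirm this. Purely as an illustration, such a subset could be drawn from GLUE SST-2 like so:

```python
from datasets import load_dataset

# Assumption: "train-16" denotes 16 labelled training examples from SST-2.
sst2 = load_dataset("glue", "sst2")
few_shot_train = sst2["train"].shuffle(seed=42).select(range(16))
print(few_shot_train["sentence"][:2], few_shot_train["label"][:2])
```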
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6928 | 1.0 | 7 | 0.6973 | 0.4286 |
| 0.675 | 2.0 | 14 | 0.7001 | 0.4286 |
| 0.6513 | 3.0 | 21 | 0.6959 | 0.4286 |
| 0.5702 | 4.0 | 28 | 0.6993 | 0.4286 |
| 0.5389 | 5.0 | 35 | 0.6020 | 0.7143 |
| 0.3386 | 6.0 | 42 | 0.5326 | 0.5714 |
| 0.2596 | 7.0 | 49 | 0.4943 | 0.7143 |
| 0.1633 | 8.0 | 56 | 0.3589 | 0.8571 |
| 0.1086 | 9.0 | 63 | 0.2924 | 0.8571 |
| 0.0641 | 10.0 | 70 | 0.2687 | 0.8571 |
| 0.0409 | 11.0 | 77 | 0.2202 | 0.8571 |
| 0.0181 | 12.0 | 84 | 0.2445 | 0.8571 |
| 0.0141 | 13.0 | 91 | 0.2885 | 0.8571 |
| 0.0108 | 14.0 | 98 | 0.3069 | 0.8571 |
| 0.009 | 15.0 | 105 | 0.3006 | 0.8571 |
| 0.0084 | 16.0 | 112 | 0.2834 | 0.8571 |
| 0.0088 | 17.0 | 119 | 0.2736 | 0.8571 |
| 0.0062 | 18.0 | 126 | 0.2579 | 0.8571 |
| 0.0058 | 19.0 | 133 | 0.2609 | 0.8571 |
| 0.0057 | 20.0 | 140 | 0.2563 | 0.8571 |
| 0.0049 | 21.0 | 147 | 0.2582 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-16-3", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-16-3 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1501
- Accuracy: 0.6387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7043 | 1.0 | 7 | 0.7139 | 0.2857 |
| 0.68 | 2.0 | 14 | 0.7398 | 0.2857 |
| 0.641 | 3.0 | 21 | 0.7723 | 0.2857 |
| 0.5424 | 4.0 | 28 | 0.8391 | 0.2857 |
| 0.5988 | 5.0 | 35 | 0.7761 | 0.2857 |
| 0.3698 | 6.0 | 42 | 0.7707 | 0.4286 |
| 0.3204 | 7.0 | 49 | 0.8290 | 0.4286 |
| 0.2882 | 8.0 | 56 | 0.6551 | 0.5714 |
| 0.1512 | 9.0 | 63 | 0.5652 | 0.5714 |
| 0.1302 | 10.0 | 70 | 0.5278 | 0.5714 |
| 0.1043 | 11.0 | 77 | 0.4987 | 0.7143 |
| 0.0272 | 12.0 | 84 | 0.5278 | 0.5714 |
| 0.0201 | 13.0 | 91 | 0.5307 | 0.5714 |
| 0.0129 | 14.0 | 98 | 0.5382 | 0.5714 |
| 0.0117 | 15.0 | 105 | 0.5227 | 0.5714 |
| 0.0094 | 16.0 | 112 | 0.5066 | 0.7143 |
| 0.0104 | 17.0 | 119 | 0.4869 | 0.7143 |
| 0.0069 | 18.0 | 126 | 0.4786 | 0.7143 |
| 0.0062 | 19.0 | 133 | 0.4707 | 0.7143 |
| 0.0065 | 20.0 | 140 | 0.4669 | 0.7143 |
| 0.0051 | 21.0 | 147 | 0.4686 | 0.7143 |
| 0.0049 | 22.0 | 154 | 0.4784 | 0.7143 |
| 0.0046 | 23.0 | 161 | 0.4839 | 0.7143 |
| 0.0039 | 24.0 | 168 | 0.4823 | 0.7143 |
| 0.0044 | 25.0 | 175 | 0.4791 | 0.7143 |
| 0.0037 | 26.0 | 182 | 0.4778 | 0.7143 |
| 0.0038 | 27.0 | 189 | 0.4770 | 0.7143 |
| 0.0036 | 28.0 | 196 | 0.4750 | 0.7143 |
| 0.0031 | 29.0 | 203 | 0.4766 | 0.7143 |
| 0.0031 | 30.0 | 210 | 0.4754 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-16-4", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-16-4 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6537
- Accuracy: 0.6332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6925 | 1.0 | 7 | 0.6966 | 0.2857 |
| 0.6703 | 2.0 | 14 | 0.7045 | 0.2857 |
| 0.6404 | 3.0 | 21 | 0.7205 | 0.2857 |
| 0.555 | 4.0 | 28 | 0.7548 | 0.2857 |
| 0.5179 | 5.0 | 35 | 0.6745 | 0.5714 |
| 0.3038 | 6.0 | 42 | 0.7260 | 0.5714 |
| 0.2089 | 7.0 | 49 | 0.8016 | 0.5714 |
| 0.1303 | 8.0 | 56 | 0.8202 | 0.5714 |
| 0.0899 | 9.0 | 63 | 0.9966 | 0.5714 |
| 0.0552 | 10.0 | 70 | 1.1887 | 0.5714 |
| 0.0333 | 11.0 | 77 | 1.2163 | 0.5714 |
| 0.0169 | 12.0 | 84 | 1.2874 | 0.5714 |
| 0.0136 | 13.0 | 91 | 1.3598 | 0.5714 |
| 0.0103 | 14.0 | 98 | 1.4237 | 0.5714 |
| 0.0089 | 15.0 | 105 | 1.4758 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-16-5", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-16-5 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set (a manual inference sketch follows these figures):
- Loss: 0.8356
- Accuracy: 0.6480
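As a hedged illustration of reproducing such an evaluation by hand (the repository id comes from this card's metadata; the input sentence is arbitrary):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "SetFit/distilbert-base-uncased__sst2__train-16-6"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("an utterly charming movie", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, pred))
```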
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6978 | 1.0 | 7 | 0.6807 | 0.4286 |
| 0.6482 | 2.0 | 14 | 0.6775 | 0.4286 |
| 0.6051 | 3.0 | 21 | 0.6623 | 0.5714 |
| 0.486 | 4.0 | 28 | 0.6710 | 0.5714 |
| 0.4612 | 5.0 | 35 | 0.5325 | 0.7143 |
| 0.2233 | 6.0 | 42 | 0.4992 | 0.7143 |
| 0.1328 | 7.0 | 49 | 0.4753 | 0.7143 |
| 0.0905 | 8.0 | 56 | 0.2416 | 1.0 |
| 0.0413 | 9.0 | 63 | 0.2079 | 1.0 |
| 0.0356 | 10.0 | 70 | 0.2234 | 0.8571 |
| 0.0217 | 11.0 | 77 | 0.2639 | 0.8571 |
| 0.0121 | 12.0 | 84 | 0.2977 | 0.8571 |
| 0.0105 | 13.0 | 91 | 0.3468 | 0.8571 |
| 0.0085 | 14.0 | 98 | 0.3912 | 0.8571 |
| 0.0077 | 15.0 | 105 | 0.4000 | 0.8571 |
| 0.0071 | 16.0 | 112 | 0.4015 | 0.8571 |
| 0.0078 | 17.0 | 119 | 0.3865 | 0.8571 |
| 0.0059 | 18.0 | 126 | 0.3603 | 0.8571 |
| 0.0051 | 19.0 | 133 | 0.3231 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-16-6", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-16-6 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6952
- Accuracy: 0.5025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6949 | 1.0 | 7 | 0.7252 | 0.2857 |
| 0.6678 | 2.0 | 14 | 0.7550 | 0.2857 |
| 0.6299 | 3.0 | 21 | 0.8004 | 0.2857 |
| 0.5596 | 4.0 | 28 | 0.8508 | 0.2857 |
| 0.5667 | 5.0 | 35 | 0.8464 | 0.2857 |
| 0.367 | 6.0 | 42 | 0.8515 | 0.2857 |
| 0.2706 | 7.0 | 49 | 0.9574 | 0.2857 |
| 0.2163 | 8.0 | 56 | 0.9710 | 0.4286 |
| 0.1024 | 9.0 | 63 | 1.1607 | 0.1429 |
| 0.1046 | 10.0 | 70 | 1.3779 | 0.1429 |
| 0.0483 | 11.0 | 77 | 1.4876 | 0.1429 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-16-7", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-16-7 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6895
- Accuracy: 0.5222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6899 | 1.0 | 7 | 0.7055 | 0.2857 |
| 0.6793 | 2.0 | 14 | 0.7205 | 0.2857 |
| 0.6291 | 3.0 | 21 | 0.7460 | 0.2857 |
| 0.5659 | 4.0 | 28 | 0.8041 | 0.2857 |
| 0.5607 | 5.0 | 35 | 0.7785 | 0.4286 |
| 0.3349 | 6.0 | 42 | 0.8163 | 0.4286 |
| 0.2436 | 7.0 | 49 | 0.9101 | 0.2857 |
| 0.1734 | 8.0 | 56 | 0.8632 | 0.5714 |
| 0.1122 | 9.0 | 63 | 0.9851 | 0.5714 |
| 0.0661 | 10.0 | 70 | 1.0835 | 0.5714 |
| 0.0407 | 11.0 | 77 | 1.1656 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-16-8", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-16-8 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6915
- Accuracy: 0.5157
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6868 | 1.0 | 7 | 0.7121 | 0.1429 |
| 0.6755 | 2.0 | 14 | 0.7234 | 0.1429 |
| 0.6389 | 3.0 | 21 | 0.7384 | 0.2857 |
| 0.5575 | 4.0 | 28 | 0.7884 | 0.2857 |
| 0.4972 | 5.0 | 35 | 0.7767 | 0.4286 |
| 0.2821 | 6.0 | 42 | 0.8275 | 0.4286 |
| 0.1859 | 7.0 | 49 | 0.9283 | 0.2857 |
| 0.1388 | 8.0 | 56 | 0.9384 | 0.4286 |
| 0.078 | 9.0 | 63 | 1.1973 | 0.4286 |
| 0.0462 | 10.0 | 70 | 1.4016 | 0.4286 |
| 0.0319 | 11.0 | 77 | 1.4087 | 0.4286 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-16-9", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-16-9 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set (a sketch of the accuracy computation follows these figures):
- Loss: 0.8558
- Accuracy: 0.7183
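The accuracy figures here and in the table below are presumably argmax-match accuracy; a sketch using the `datasets` release pinned in this card (1.18.2), where `load_metric` was the standard entry point:

```python
import numpy as np
from datasets import load_metric

accuracy = load_metric("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes in.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```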
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7088 | 1.0 | 13 | 0.6819 | 0.6154 |
| 0.635 | 2.0 | 26 | 0.6318 | 0.7692 |
| 0.547 | 3.0 | 39 | 0.5356 | 0.7692 |
| 0.3497 | 4.0 | 52 | 0.4456 | 0.6923 |
| 0.1979 | 5.0 | 65 | 0.3993 | 0.7692 |
| 0.098 | 6.0 | 78 | 0.3613 | 0.7692 |
| 0.0268 | 7.0 | 91 | 0.3561 | 0.9231 |
| 0.0137 | 8.0 | 104 | 0.3755 | 0.9231 |
| 0.0083 | 9.0 | 117 | 0.4194 | 0.7692 |
| 0.0065 | 10.0 | 130 | 0.4446 | 0.7692 |
| 0.005 | 11.0 | 143 | 0.4527 | 0.7692 |
| 0.0038 | 12.0 | 156 | 0.4645 | 0.7692 |
| 0.0033 | 13.0 | 169 | 0.4735 | 0.7692 |
| 0.0033 | 14.0 | 182 | 0.4874 | 0.7692 |
| 0.0029 | 15.0 | 195 | 0.5041 | 0.7692 |
| 0.0025 | 16.0 | 208 | 0.5148 | 0.7692 |
| 0.0024 | 17.0 | 221 | 0.5228 | 0.7692 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-32-0", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-32-0 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6492
- Accuracy: 0.6551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7106 | 1.0 | 13 | 0.6850 | 0.6154 |
| 0.631 | 2.0 | 26 | 0.6632 | 0.6923 |
| 0.5643 | 3.0 | 39 | 0.6247 | 0.7692 |
| 0.3992 | 4.0 | 52 | 0.5948 | 0.7692 |
| 0.1928 | 5.0 | 65 | 0.5803 | 0.7692 |
| 0.0821 | 6.0 | 78 | 0.6404 | 0.6923 |
| 0.0294 | 7.0 | 91 | 0.7387 | 0.6923 |
| 0.0141 | 8.0 | 104 | 0.8270 | 0.6923 |
| 0.0082 | 9.0 | 117 | 0.8496 | 0.6923 |
| 0.0064 | 10.0 | 130 | 0.8679 | 0.6923 |
| 0.005 | 11.0 | 143 | 0.8914 | 0.6923 |
| 0.0036 | 12.0 | 156 | 0.9278 | 0.6923 |
| 0.0031 | 13.0 | 169 | 0.9552 | 0.6923 |
| 0.0029 | 14.0 | 182 | 0.9745 | 0.6923 |
| 0.0028 | 15.0 | 195 | 0.9785 | 0.6923 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-32-1", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-32-1 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4805
- Accuracy: 0.7699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7124 | 1.0 | 13 | 0.6882 | 0.5385 |
| 0.6502 | 2.0 | 26 | 0.6715 | 0.5385 |
| 0.6001 | 3.0 | 39 | 0.6342 | 0.6154 |
| 0.455 | 4.0 | 52 | 0.5713 | 0.7692 |
| 0.2605 | 5.0 | 65 | 0.5562 | 0.7692 |
| 0.1258 | 6.0 | 78 | 0.6799 | 0.7692 |
| 0.0444 | 7.0 | 91 | 0.8096 | 0.7692 |
| 0.0175 | 8.0 | 104 | 0.9281 | 0.6923 |
| 0.0106 | 9.0 | 117 | 0.9826 | 0.6923 |
| 0.0077 | 10.0 | 130 | 1.0254 | 0.7692 |
| 0.0056 | 11.0 | 143 | 1.0667 | 0.7692 |
| 0.0042 | 12.0 | 156 | 1.1003 | 0.7692 |
| 0.0036 | 13.0 | 169 | 1.1299 | 0.7692 |
| 0.0034 | 14.0 | 182 | 1.1623 | 0.6923 |
| 0.003 | 15.0 | 195 | 1.1938 | 0.6923 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-32-2", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-32-2 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5694
- Accuracy: 0.7073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7118 | 1.0 | 13 | 0.6844 | 0.5385 |
| 0.6587 | 2.0 | 26 | 0.6707 | 0.6154 |
| 0.6067 | 3.0 | 39 | 0.6295 | 0.5385 |
| 0.4714 | 4.0 | 52 | 0.5811 | 0.6923 |
| 0.2444 | 5.0 | 65 | 0.5932 | 0.7692 |
| 0.1007 | 6.0 | 78 | 0.7386 | 0.6923 |
| 0.0332 | 7.0 | 91 | 0.6962 | 0.6154 |
| 0.0147 | 8.0 | 104 | 0.8200 | 0.7692 |
| 0.0083 | 9.0 | 117 | 0.9250 | 0.7692 |
| 0.0066 | 10.0 | 130 | 0.9345 | 0.7692 |
| 0.005 | 11.0 | 143 | 0.9313 | 0.7692 |
| 0.0036 | 12.0 | 156 | 0.9356 | 0.7692 |
| 0.0031 | 13.0 | 169 | 0.9395 | 0.7692 |
| 0.0029 | 14.0 | 182 | 0.9504 | 0.7692 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-32-3", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-32-3 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5001
- Accuracy: 0.7650
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7175 | 1.0 | 13 | 0.6822 | 0.5385 |
| 0.6559 | 2.0 | 26 | 0.6533 | 0.6154 |
| 0.6052 | 3.0 | 39 | 0.5762 | 0.7692 |
| 0.4587 | 4.0 | 52 | 0.4477 | 0.8462 |
| 0.2459 | 5.0 | 65 | 0.4288 | 0.7692 |
| 0.1001 | 6.0 | 78 | 0.5219 | 0.7692 |
| 0.0308 | 7.0 | 91 | 0.8540 | 0.7692 |
| 0.014 | 8.0 | 104 | 0.7789 | 0.7692 |
| 0.0083 | 9.0 | 117 | 0.7996 | 0.7692 |
| 0.0064 | 10.0 | 130 | 0.8342 | 0.7692 |
| 0.0049 | 11.0 | 143 | 0.8612 | 0.7692 |
| 0.0036 | 12.0 | 156 | 0.8834 | 0.7692 |
| 0.0032 | 13.0 | 169 | 0.9067 | 0.7692 |
| 0.003 | 14.0 | 182 | 0.9332 | 0.7692 |
| 0.0028 | 15.0 | 195 | 0.9511 | 0.7692 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-32-4", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-32-4 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6248
- Accuracy: 0.6826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7136 | 1.0 | 13 | 0.6850 | 0.5385 |
| 0.6496 | 2.0 | 26 | 0.6670 | 0.6154 |
| 0.5895 | 3.0 | 39 | 0.6464 | 0.7692 |
| 0.4271 | 4.0 | 52 | 0.6478 | 0.7692 |
| 0.2182 | 5.0 | 65 | 0.6809 | 0.6923 |
| 0.103 | 6.0 | 78 | 0.9119 | 0.6923 |
| 0.0326 | 7.0 | 91 | 1.0718 | 0.6923 |
| 0.0154 | 8.0 | 104 | 1.0721 | 0.7692 |
| 0.0087 | 9.0 | 117 | 1.1416 | 0.7692 |
| 0.0067 | 10.0 | 130 | 1.2088 | 0.7692 |
| 0.005 | 11.0 | 143 | 1.2656 | 0.7692 |
| 0.0037 | 12.0 | 156 | 1.3104 | 0.7692 |
| 0.0032 | 13.0 | 169 | 1.3428 | 0.6923 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-32-5", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-32-5 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5072
- Accuracy: 0.7650
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7057 | 1.0 | 13 | 0.6704 | 0.6923 |
| 0.6489 | 2.0 | 26 | 0.6228 | 0.8462 |
| 0.5475 | 3.0 | 39 | 0.5079 | 0.8462 |
| 0.4014 | 4.0 | 52 | 0.4203 | 0.8462 |
| 0.1923 | 5.0 | 65 | 0.3872 | 0.8462 |
| 0.1014 | 6.0 | 78 | 0.4909 | 0.8462 |
| 0.0349 | 7.0 | 91 | 0.5460 | 0.8462 |
| 0.0173 | 8.0 | 104 | 0.4867 | 0.8462 |
| 0.0098 | 9.0 | 117 | 0.5274 | 0.8462 |
| 0.0075 | 10.0 | 130 | 0.6086 | 0.8462 |
| 0.0057 | 11.0 | 143 | 0.6604 | 0.8462 |
| 0.0041 | 12.0 | 156 | 0.6904 | 0.8462 |
| 0.0037 | 13.0 | 169 | 0.7164 | 0.8462 |
| 0.0034 | 14.0 | 182 | 0.7368 | 0.8462 |
| 0.0031 | 15.0 | 195 | 0.7565 | 0.8462 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-32-6", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-32-6 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6736
- Accuracy: 0.5931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7094 | 1.0 | 13 | 0.6887 | 0.5385 |
| 0.651 | 2.0 | 26 | 0.6682 | 0.6923 |
| 0.6084 | 3.0 | 39 | 0.6412 | 0.6923 |
| 0.4547 | 4.0 | 52 | 0.6095 | 0.6923 |
| 0.2903 | 5.0 | 65 | 0.6621 | 0.6923 |
| 0.1407 | 6.0 | 78 | 0.7130 | 0.7692 |
| 0.0444 | 7.0 | 91 | 0.9007 | 0.6923 |
| 0.0176 | 8.0 | 104 | 0.9525 | 0.7692 |
| 0.0098 | 9.0 | 117 | 1.0289 | 0.7692 |
| 0.0071 | 10.0 | 130 | 1.0876 | 0.7692 |
| 0.0052 | 11.0 | 143 | 1.1431 | 0.6923 |
| 0.0038 | 12.0 | 156 | 1.1687 | 0.7692 |
| 0.0034 | 13.0 | 169 | 1.1792 | 0.7692 |
| 0.0031 | 14.0 | 182 | 1.2033 | 0.7692 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-32-7", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-32-7 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6880
- Accuracy: 0.5014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.712 | 1.0 | 13 | 0.6936 | 0.5385 |
| 0.665 | 2.0 | 26 | 0.6960 | 0.3846 |
| 0.6112 | 3.0 | 39 | 0.7138 | 0.3846 |
| 0.4521 | 4.0 | 52 | 0.8243 | 0.4615 |
| 0.2627 | 5.0 | 65 | 0.7723 | 0.6154 |
| 0.0928 | 6.0 | 78 | 1.2666 | 0.5385 |
| 0.0312 | 7.0 | 91 | 1.2306 | 0.6154 |
| 0.0132 | 8.0 | 104 | 1.3385 | 0.6154 |
| 0.0082 | 9.0 | 117 | 1.4584 | 0.6154 |
| 0.0063 | 10.0 | 130 | 1.5429 | 0.6154 |
| 0.0049 | 11.0 | 143 | 1.5913 | 0.6154 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-32-8", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-32-8 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5625
- Accuracy: 0.7353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7057 | 1.0 | 13 | 0.6805 | 0.5385 |
| 0.6642 | 2.0 | 26 | 0.6526 | 0.7692 |
| 0.5869 | 3.0 | 39 | 0.5773 | 0.8462 |
| 0.4085 | 4.0 | 52 | 0.4959 | 0.8462 |
| 0.2181 | 5.0 | 65 | 0.4902 | 0.6923 |
| 0.069 | 6.0 | 78 | 0.5065 | 0.8462 |
| 0.0522 | 7.0 | 91 | 0.6082 | 0.7692 |
| 0.0135 | 8.0 | 104 | 0.6924 | 0.7692 |
| 0.0084 | 9.0 | 117 | 0.5921 | 0.7692 |
| 0.0061 | 10.0 | 130 | 0.6477 | 0.7692 |
| 0.0047 | 11.0 | 143 | 0.6648 | 0.7692 |
| 0.0035 | 12.0 | 156 | 0.6640 | 0.7692 |
| 0.0031 | 13.0 | 169 | 0.6615 | 0.7692 |
| 0.0029 | 14.0 | 182 | 0.6605 | 0.7692 |
| 0.0026 | 15.0 | 195 | 0.6538 | 0.8462 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-32-9", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-32-9 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6920
- Accuracy: 0.5189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6916 | 1.0 | 3 | 0.7035 | 0.25 |
| 0.6852 | 2.0 | 6 | 0.7139 | 0.25 |
| 0.6533 | 3.0 | 9 | 0.7192 | 0.25 |
| 0.6211 | 4.0 | 12 | 0.7322 | 0.25 |
| 0.5522 | 5.0 | 15 | 0.7561 | 0.25 |
| 0.488 | 6.0 | 18 | 0.7883 | 0.25 |
| 0.48 | 7.0 | 21 | 0.8224 | 0.25 |
| 0.3948 | 8.0 | 24 | 0.8605 | 0.25 |
| 0.3478 | 9.0 | 27 | 0.8726 | 0.25 |
| 0.2723 | 10.0 | 30 | 0.8885 | 0.25 |
| 0.2174 | 11.0 | 33 | 0.8984 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-0", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-8-0 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6930
- Accuracy: 0.5047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7082 | 1.0 | 3 | 0.7048 | 0.25 |
| 0.6761 | 2.0 | 6 | 0.7249 | 0.25 |
| 0.6653 | 3.0 | 9 | 0.7423 | 0.25 |
| 0.6212 | 4.0 | 12 | 0.7727 | 0.25 |
| 0.5932 | 5.0 | 15 | 0.8098 | 0.25 |
| 0.5427 | 6.0 | 18 | 0.8496 | 0.25 |
| 0.5146 | 7.0 | 21 | 0.8992 | 0.25 |
| 0.4356 | 8.0 | 24 | 0.9494 | 0.25 |
| 0.4275 | 9.0 | 27 | 0.9694 | 0.25 |
| 0.3351 | 10.0 | 30 | 0.9968 | 0.25 |
| 0.2812 | 11.0 | 33 | 1.0056 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-1", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-8-1 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6932
- Accuracy: 0.4931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7081 | 1.0 | 3 | 0.7031 | 0.25 |
| 0.6853 | 2.0 | 6 | 0.7109 | 0.25 |
| 0.6696 | 3.0 | 9 | 0.7211 | 0.25 |
| 0.6174 | 4.0 | 12 | 0.7407 | 0.25 |
| 0.5717 | 5.0 | 15 | 0.7625 | 0.25 |
| 0.5096 | 6.0 | 18 | 0.7732 | 0.25 |
| 0.488 | 7.0 | 21 | 0.7798 | 0.25 |
| 0.4023 | 8.0 | 24 | 0.7981 | 0.25 |
| 0.3556 | 9.0 | 27 | 0.8110 | 0.25 |
| 0.2714 | 10.0 | 30 | 0.8269 | 0.25 |
| 0.2295 | 11.0 | 33 | 0.8276 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-2", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-8-2 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6914
- Accuracy: 0.5195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6931 | 1.0 | 3 | 0.7039 | 0.25 |
| 0.6615 | 2.0 | 6 | 0.7186 | 0.25 |
| 0.653 | 3.0 | 9 | 0.7334 | 0.25 |
| 0.601 | 4.0 | 12 | 0.7592 | 0.25 |
| 0.5555 | 5.0 | 15 | 0.7922 | 0.25 |
| 0.4832 | 6.0 | 18 | 0.8179 | 0.25 |
| 0.4565 | 7.0 | 21 | 0.8285 | 0.25 |
| 0.3996 | 8.0 | 24 | 0.8559 | 0.25 |
| 0.3681 | 9.0 | 27 | 0.8586 | 0.5 |
| 0.2901 | 10.0 | 30 | 0.8646 | 0.5 |
| 0.241 | 11.0 | 33 | 0.8524 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-3", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-8-3 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6921
- Accuracy: 0.5107
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7163 | 1.0 | 3 | 0.7100 | 0.25 |
| 0.6785 | 2.0 | 6 | 0.7209 | 0.25 |
| 0.6455 | 3.0 | 9 | 0.7321 | 0.25 |
| 0.6076 | 4.0 | 12 | 0.7517 | 0.25 |
| 0.5593 | 5.0 | 15 | 0.7780 | 0.25 |
| 0.5202 | 6.0 | 18 | 0.7990 | 0.25 |
| 0.4967 | 7.0 | 21 | 0.8203 | 0.25 |
| 0.4158 | 8.0 | 24 | 0.8497 | 0.25 |
| 0.3997 | 9.0 | 27 | 0.8638 | 0.25 |
| 0.3064 | 10.0 | 30 | 0.8732 | 0.25 |
| 0.2618 | 11.0 | 33 | 0.8669 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-4", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-8-4 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8419
- Accuracy: 0.6172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7057 | 1.0 | 3 | 0.6848 | 0.75 |
| 0.6681 | 2.0 | 6 | 0.6875 | 0.5 |
| 0.6591 | 3.0 | 9 | 0.6868 | 0.25 |
| 0.6052 | 4.0 | 12 | 0.6943 | 0.25 |
| 0.557 | 5.0 | 15 | 0.7078 | 0.25 |
| 0.4954 | 6.0 | 18 | 0.7168 | 0.25 |
| 0.4593 | 7.0 | 21 | 0.7185 | 0.25 |
| 0.3936 | 8.0 | 24 | 0.7212 | 0.25 |
| 0.3699 | 9.0 | 27 | 0.6971 | 0.5 |
| 0.2916 | 10.0 | 30 | 0.6827 | 0.5 |
| 0.2511 | 11.0 | 33 | 0.6464 | 0.5 |
| 0.2109 | 12.0 | 36 | 0.6344 | 0.75 |
| 0.1655 | 13.0 | 39 | 0.6377 | 0.75 |
| 0.1412 | 14.0 | 42 | 0.6398 | 0.75 |
| 0.1157 | 15.0 | 45 | 0.6315 | 0.75 |
| 0.0895 | 16.0 | 48 | 0.6210 | 0.75 |
| 0.0783 | 17.0 | 51 | 0.5918 | 0.75 |
| 0.0606 | 18.0 | 54 | 0.5543 | 0.75 |
| 0.0486 | 19.0 | 57 | 0.5167 | 0.75 |
| 0.0405 | 20.0 | 60 | 0.4862 | 0.75 |
| 0.0376 | 21.0 | 63 | 0.4644 | 0.75 |
| 0.0294 | 22.0 | 66 | 0.4497 | 0.75 |
| 0.0261 | 23.0 | 69 | 0.4428 | 0.75 |
| 0.0238 | 24.0 | 72 | 0.4408 | 0.75 |
| 0.0217 | 25.0 | 75 | 0.4392 | 0.75 |
| 0.0187 | 26.0 | 78 | 0.4373 | 0.75 |
| 0.0177 | 27.0 | 81 | 0.4360 | 0.75 |
| 0.0136 | 28.0 | 84 | 0.4372 | 0.75 |
| 0.0144 | 29.0 | 87 | 0.4368 | 0.75 |
| 0.014 | 30.0 | 90 | 0.4380 | 0.75 |
| 0.0137 | 31.0 | 93 | 0.4383 | 0.75 |
| 0.0133 | 32.0 | 96 | 0.4409 | 0.75 |
| 0.013 | 33.0 | 99 | 0.4380 | 0.75 |
| 0.0096 | 34.0 | 102 | 0.4358 | 0.75 |
| 0.012 | 35.0 | 105 | 0.4339 | 0.75 |
| 0.0122 | 36.0 | 108 | 0.4305 | 0.75 |
| 0.0109 | 37.0 | 111 | 0.4267 | 0.75 |
| 0.0121 | 38.0 | 114 | 0.4231 | 0.75 |
| 0.0093 | 39.0 | 117 | 0.4209 | 0.75 |
| 0.0099 | 40.0 | 120 | 0.4199 | 0.75 |
| 0.0091 | 41.0 | 123 | 0.4184 | 0.75 |
| 0.0116 | 42.0 | 126 | 0.4173 | 0.75 |
| 0.01 | 43.0 | 129 | 0.4163 | 0.75 |
| 0.0098 | 44.0 | 132 | 0.4153 | 0.75 |
| 0.0101 | 45.0 | 135 | 0.4155 | 0.75 |
| 0.0088 | 46.0 | 138 | 0.4149 | 0.75 |
| 0.0087 | 47.0 | 141 | 0.4150 | 0.75 |
| 0.0093 | 48.0 | 144 | 0.4147 | 0.75 |
| 0.0081 | 49.0 | 147 | 0.4147 | 0.75 |
| 0.009 | 50.0 | 150 | 0.4150 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-5", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-8-5 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5336
- Accuracy: 0.7523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7161 | 1.0 | 3 | 0.6941 | 0.5 |
| 0.6786 | 2.0 | 6 | 0.7039 | 0.25 |
| 0.6586 | 3.0 | 9 | 0.7090 | 0.25 |
| 0.6121 | 4.0 | 12 | 0.7183 | 0.25 |
| 0.5696 | 5.0 | 15 | 0.7266 | 0.25 |
| 0.522 | 6.0 | 18 | 0.7305 | 0.25 |
| 0.4899 | 7.0 | 21 | 0.7339 | 0.25 |
| 0.3985 | 8.0 | 24 | 0.7429 | 0.25 |
| 0.3758 | 9.0 | 27 | 0.7224 | 0.25 |
| 0.2876 | 10.0 | 30 | 0.7068 | 0.5 |
| 0.2498 | 11.0 | 33 | 0.6751 | 0.75 |
| 0.1921 | 12.0 | 36 | 0.6487 | 0.75 |
| 0.1491 | 13.0 | 39 | 0.6261 | 0.75 |
| 0.1276 | 14.0 | 42 | 0.6102 | 0.75 |
| 0.0996 | 15.0 | 45 | 0.5964 | 0.75 |
| 0.073 | 16.0 | 48 | 0.6019 | 0.75 |
| 0.0627 | 17.0 | 51 | 0.5933 | 0.75 |
| 0.053 | 18.0 | 54 | 0.5768 | 0.75 |
| 0.0403 | 19.0 | 57 | 0.5698 | 0.75 |
| 0.0328 | 20.0 | 60 | 0.5656 | 0.75 |
| 0.03 | 21.0 | 63 | 0.5634 | 0.75 |
| 0.025 | 22.0 | 66 | 0.5620 | 0.75 |
| 0.0209 | 23.0 | 69 | 0.5623 | 0.75 |
| 0.0214 | 24.0 | 72 | 0.5606 | 0.75 |
| 0.0191 | 25.0 | 75 | 0.5565 | 0.75 |
| 0.0173 | 26.0 | 78 | 0.5485 | 0.75 |
| 0.0175 | 27.0 | 81 | 0.5397 | 0.75 |
| 0.0132 | 28.0 | 84 | 0.5322 | 0.75 |
| 0.0138 | 29.0 | 87 | 0.5241 | 0.75 |
| 0.0128 | 30.0 | 90 | 0.5235 | 0.75 |
| 0.0126 | 31.0 | 93 | 0.5253 | 0.75 |
| 0.012 | 32.0 | 96 | 0.5317 | 0.75 |
| 0.0118 | 33.0 | 99 | 0.5342 | 0.75 |
| 0.0092 | 34.0 | 102 | 0.5388 | 0.75 |
| 0.0117 | 35.0 | 105 | 0.5414 | 0.75 |
| 0.0124 | 36.0 | 108 | 0.5453 | 0.75 |
| 0.0109 | 37.0 | 111 | 0.5506 | 0.75 |
| 0.0112 | 38.0 | 114 | 0.5555 | 0.75 |
| 0.0087 | 39.0 | 117 | 0.5597 | 0.75 |
| 0.01 | 40.0 | 120 | 0.5640 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-6", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-8-6 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6950
- Accuracy: 0.4618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7156 | 1.0 | 3 | 0.6965 | 0.25 |
| 0.6645 | 2.0 | 6 | 0.7059 | 0.25 |
| 0.6368 | 3.0 | 9 | 0.7179 | 0.25 |
| 0.5944 | 4.0 | 12 | 0.7408 | 0.25 |
| 0.5369 | 5.0 | 15 | 0.7758 | 0.25 |
| 0.449 | 6.0 | 18 | 0.8009 | 0.25 |
| 0.4352 | 7.0 | 21 | 0.8209 | 0.5 |
| 0.3462 | 8.0 | 24 | 0.8470 | 0.5 |
| 0.3028 | 9.0 | 27 | 0.8579 | 0.5 |
| 0.2365 | 10.0 | 30 | 0.8704 | 0.5 |
| 0.2023 | 11.0 | 33 | 0.8770 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-7", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-8-7 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6925
- Accuracy: 0.5200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7061 | 1.0 | 3 | 0.6899 | 0.75 |
| 0.6627 | 2.0 | 6 | 0.7026 | 0.25 |
| 0.644 | 3.0 | 9 | 0.7158 | 0.25 |
| 0.6087 | 4.0 | 12 | 0.7325 | 0.25 |
| 0.5602 | 5.0 | 15 | 0.7555 | 0.25 |
| 0.5034 | 6.0 | 18 | 0.7725 | 0.25 |
| 0.4672 | 7.0 | 21 | 0.7983 | 0.25 |
| 0.403 | 8.0 | 24 | 0.8314 | 0.25 |
| 0.3571 | 9.0 | 27 | 0.8555 | 0.25 |
| 0.2792 | 10.0 | 30 | 0.9065 | 0.25 |
| 0.2373 | 11.0 | 33 | 0.9286 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-8", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-8-8 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6925
- Accuracy: 0.5140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7204 | 1.0 | 3 | 0.7025 | 0.5 |
| 0.6885 | 2.0 | 6 | 0.7145 | 0.5 |
| 0.6662 | 3.0 | 9 | 0.7222 | 0.5 |
| 0.6182 | 4.0 | 12 | 0.7427 | 0.25 |
| 0.5707 | 5.0 | 15 | 0.7773 | 0.25 |
| 0.5247 | 6.0 | 18 | 0.8137 | 0.25 |
| 0.5003 | 7.0 | 21 | 0.8556 | 0.25 |
| 0.4195 | 8.0 | 24 | 0.9089 | 0.5 |
| 0.387 | 9.0 | 27 | 0.9316 | 0.25 |
| 0.2971 | 10.0 | 30 | 0.9558 | 0.25 |
| 0.2581 | 11.0 | 33 | 0.9420 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst2__train-8-9", "results": []}]} | SetFit/distilbert-base-uncased__sst2__train-8-9 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst5__all-train
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3757
- Accuracy: 0.5045
## Model description
More information needed
## Intended uses & limitations
More information needed
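Absent further guidance, a hedged inference sketch for this 5-way (SST-5-style) classifier is given below; the model id is taken from this card's metadata, and the label mapping depends on the exported config:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "SetFit/distilbert-base-uncased__sst5__all-train"  # from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("an utterly forgettable film", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Five sentiment classes; the index-to-name mapping lives in
# model.config.id2label (often LABEL_0..LABEL_4 for auto-generated cards).
probs = torch.softmax(logits, dim=-1)[0]
print(model.config.id2label[int(probs.argmax())], probs.tolist())
```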
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2492 | 1.0 | 534 | 1.1163 | 0.4991 |
| 0.9937 | 2.0 | 1068 | 1.1232 | 0.5122 |
| 0.7867 | 3.0 | 1602 | 1.2097 | 0.5045 |
| 0.595 | 4.0 | 2136 | 1.3757 | 0.5045 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__sst5__all-train", "results": []}]} | SetFit/distilbert-base-uncased__sst5__all-train | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__all-train
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3193
- Accuracy: 0.9485
## Model description
More information needed
## Intended uses & limitations
More information needed
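As an illustrative sketch only (the model id is assumed from the metadata, and the sentences are made up, not drawn from the training data), batched subjective-vs-objective scoring could look like this:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="SetFit/distilbert-base-uncased__subj__all-train",
)

# subj is a subjective-vs-objective task; see classifier.model.config.id2label
# for this checkpoint's concrete label names.
sentences = [
    "the movie is a masterpiece of quiet menace",  # subjective
    "the film was released in theaters in 2004",   # objective
]
for result in classifier(sentences):
    print(result["label"], round(result["score"], 3))
```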
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1992 | 1.0 | 500 | 0.1236 | 0.963 |
| 0.084 | 2.0 | 1000 | 0.1428 | 0.963 |
| 0.0333 | 3.0 | 1500 | 0.1906 | 0.965 |
| 0.0159 | 4.0 | 2000 | 0.3193 | 0.9485 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased__subj__all-train", "results": []}]} | SetFit/distilbert-base-uncased__subj__all-train | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4440
- Accuracy: 0.789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7163 | 1.0 | 3 | 0.6868 | 0.5 |
| 0.6683 | 2.0 | 6 | 0.6804 | 0.75 |
| 0.6375 | 3.0 | 9 | 0.6702 | 0.75 |
| 0.5997 | 4.0 | 12 | 0.6686 | 0.75 |
| 0.5345 | 5.0 | 15 | 0.6720 | 0.75 |
| 0.4673 | 6.0 | 18 | 0.6646 | 0.75 |
| 0.4214 | 7.0 | 21 | 0.6494 | 0.75 |
| 0.3439 | 8.0 | 24 | 0.6313 | 0.75 |
| 0.3157 | 9.0 | 27 | 0.6052 | 0.75 |
| 0.2329 | 10.0 | 30 | 0.5908 | 0.75 |
| 0.1989 | 11.0 | 33 | 0.5768 | 0.75 |
| 0.1581 | 12.0 | 36 | 0.5727 | 0.75 |
| 0.1257 | 13.0 | 39 | 0.5678 | 0.75 |
| 0.1005 | 14.0 | 42 | 0.5518 | 0.75 |
| 0.0836 | 15.0 | 45 | 0.5411 | 0.75 |
| 0.0611 | 16.0 | 48 | 0.5320 | 0.75 |
| 0.0503 | 17.0 | 51 | 0.5299 | 0.75 |
| 0.0407 | 18.0 | 54 | 0.5368 | 0.75 |
| 0.0332 | 19.0 | 57 | 0.5455 | 0.75 |
| 0.0293 | 20.0 | 60 | 0.5525 | 0.75 |
| 0.0254 | 21.0 | 63 | 0.5560 | 0.75 |
| 0.0231 | 22.0 | 66 | 0.5569 | 0.75 |
| 0.0201 | 23.0 | 69 | 0.5572 | 0.75 |
| 0.0179 | 24.0 | 72 | 0.5575 | 0.75 |
| 0.0184 | 25.0 | 75 | 0.5547 | 0.75 |
| 0.0148 | 26.0 | 78 | 0.5493 | 0.75 |
| 0.0149 | 27.0 | 81 | 0.5473 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-0", "results": []}]} | SetFit/distilbert-base-uncased__subj__train-8-0 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5488
- Accuracy: 0.791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.703 | 1.0 | 3 | 0.6906 | 0.5 |
| 0.666 | 2.0 | 6 | 0.6945 | 0.25 |
| 0.63 | 3.0 | 9 | 0.6885 | 0.5 |
| 0.588 | 4.0 | 12 | 0.6888 | 0.25 |
| 0.5181 | 5.0 | 15 | 0.6899 | 0.25 |
| 0.4508 | 6.0 | 18 | 0.6770 | 0.5 |
| 0.4025 | 7.0 | 21 | 0.6579 | 0.5 |
| 0.3361 | 8.0 | 24 | 0.6392 | 0.5 |
| 0.2919 | 9.0 | 27 | 0.6113 | 0.5 |
| 0.2151 | 10.0 | 30 | 0.5774 | 0.75 |
| 0.1728 | 11.0 | 33 | 0.5248 | 0.75 |
| 0.1313 | 12.0 | 36 | 0.4824 | 0.75 |
| 0.1046 | 13.0 | 39 | 0.4456 | 0.75 |
| 0.0858 | 14.0 | 42 | 0.4076 | 0.75 |
| 0.0679 | 15.0 | 45 | 0.3755 | 0.75 |
| 0.0485 | 16.0 | 48 | 0.3422 | 0.75 |
| 0.0416 | 17.0 | 51 | 0.3055 | 0.75 |
| 0.0358 | 18.0 | 54 | 0.2731 | 1.0 |
| 0.0277 | 19.0 | 57 | 0.2443 | 1.0 |
| 0.0234 | 20.0 | 60 | 0.2187 | 1.0 |
| 0.0223 | 21.0 | 63 | 0.1960 | 1.0 |
| 0.0187 | 22.0 | 66 | 0.1762 | 1.0 |
| 0.017 | 23.0 | 69 | 0.1629 | 1.0 |
| 0.0154 | 24.0 | 72 | 0.1543 | 1.0 |
| 0.0164 | 25.0 | 75 | 0.1476 | 1.0 |
| 0.0131 | 26.0 | 78 | 0.1423 | 1.0 |
| 0.0139 | 27.0 | 81 | 0.1387 | 1.0 |
| 0.0107 | 28.0 | 84 | 0.1360 | 1.0 |
| 0.0108 | 29.0 | 87 | 0.1331 | 1.0 |
| 0.0105 | 30.0 | 90 | 0.1308 | 1.0 |
| 0.0106 | 31.0 | 93 | 0.1276 | 1.0 |
| 0.0104 | 32.0 | 96 | 0.1267 | 1.0 |
| 0.0095 | 33.0 | 99 | 0.1255 | 1.0 |
| 0.0076 | 34.0 | 102 | 0.1243 | 1.0 |
| 0.0094 | 35.0 | 105 | 0.1235 | 1.0 |
| 0.0103 | 36.0 | 108 | 0.1228 | 1.0 |
| 0.0086 | 37.0 | 111 | 0.1231 | 1.0 |
| 0.0094 | 38.0 | 114 | 0.1236 | 1.0 |
| 0.0074 | 39.0 | 117 | 0.1240 | 1.0 |
| 0.0085 | 40.0 | 120 | 0.1246 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.1253 | 1.0 |
| 0.0088 | 42.0 | 126 | 0.1248 | 1.0 |
| 0.0082 | 43.0 | 129 | 0.1244 | 1.0 |
| 0.0082 | 44.0 | 132 | 0.1234 | 1.0 |
| 0.0082 | 45.0 | 135 | 0.1223 | 1.0 |
| 0.0071 | 46.0 | 138 | 0.1212 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.1208 | 1.0 |
| 0.0081 | 48.0 | 144 | 0.1205 | 1.0 |
| 0.0067 | 49.0 | 147 | 0.1202 | 1.0 |
| 0.0077 | 50.0 | 150 | 0.1202 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-1", "results": []}]} | SetFit/distilbert-base-uncased__subj__train-8-1 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3081
- Accuracy: 0.8755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7146 | 1.0 | 3 | 0.6798 | 0.75 |
| 0.6737 | 2.0 | 6 | 0.6847 | 0.75 |
| 0.6519 | 3.0 | 9 | 0.6783 | 0.75 |
| 0.6105 | 4.0 | 12 | 0.6812 | 0.25 |
| 0.5463 | 5.0 | 15 | 0.6869 | 0.25 |
| 0.4922 | 6.0 | 18 | 0.6837 | 0.5 |
| 0.4543 | 7.0 | 21 | 0.6716 | 0.5 |
| 0.3856 | 8.0 | 24 | 0.6613 | 0.75 |
| 0.3475 | 9.0 | 27 | 0.6282 | 0.75 |
| 0.2717 | 10.0 | 30 | 0.6045 | 0.75 |
| 0.2347 | 11.0 | 33 | 0.5620 | 0.75 |
| 0.1979 | 12.0 | 36 | 0.5234 | 1.0 |
| 0.1535 | 13.0 | 39 | 0.4771 | 1.0 |
| 0.1332 | 14.0 | 42 | 0.4277 | 1.0 |
| 0.1041 | 15.0 | 45 | 0.3785 | 1.0 |
| 0.082 | 16.0 | 48 | 0.3318 | 1.0 |
| 0.0672 | 17.0 | 51 | 0.2885 | 1.0 |
| 0.0538 | 18.0 | 54 | 0.2568 | 1.0 |
| 0.0412 | 19.0 | 57 | 0.2356 | 1.0 |
| 0.0361 | 20.0 | 60 | 0.2217 | 1.0 |
| 0.0303 | 21.0 | 63 | 0.2125 | 1.0 |
| 0.0268 | 22.0 | 66 | 0.2060 | 1.0 |
| 0.0229 | 23.0 | 69 | 0.2015 | 1.0 |
| 0.0215 | 24.0 | 72 | 0.1989 | 1.0 |
| 0.0211 | 25.0 | 75 | 0.1969 | 1.0 |
| 0.0172 | 26.0 | 78 | 0.1953 | 1.0 |
| 0.0165 | 27.0 | 81 | 0.1935 | 1.0 |
| 0.0132 | 28.0 | 84 | 0.1923 | 1.0 |
| 0.0146 | 29.0 | 87 | 0.1914 | 1.0 |
| 0.0125 | 30.0 | 90 | 0.1904 | 1.0 |
| 0.0119 | 31.0 | 93 | 0.1897 | 1.0 |
| 0.0122 | 32.0 | 96 | 0.1886 | 1.0 |
| 0.0118 | 33.0 | 99 | 0.1875 | 1.0 |
| 0.0097 | 34.0 | 102 | 0.1866 | 1.0 |
| 0.0111 | 35.0 | 105 | 0.1861 | 1.0 |
| 0.0111 | 36.0 | 108 | 0.1855 | 1.0 |
| 0.0102 | 37.0 | 111 | 0.1851 | 1.0 |
| 0.0109 | 38.0 | 114 | 0.1851 | 1.0 |
| 0.0085 | 39.0 | 117 | 0.1854 | 1.0 |
| 0.0089 | 40.0 | 120 | 0.1855 | 1.0 |
| 0.0092 | 41.0 | 123 | 0.1863 | 1.0 |
| 0.0105 | 42.0 | 126 | 0.1868 | 1.0 |
| 0.0089 | 43.0 | 129 | 0.1874 | 1.0 |
| 0.0091 | 44.0 | 132 | 0.1877 | 1.0 |
| 0.0096 | 45.0 | 135 | 0.1881 | 1.0 |
| 0.0081 | 46.0 | 138 | 0.1881 | 1.0 |
| 0.0086 | 47.0 | 141 | 0.1883 | 1.0 |
| 0.009 | 48.0 | 144 | 0.1884 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-2", "results": []}]} | SetFit/distilbert-base-uncased__subj__train-8-2 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3496
- Accuracy: 0.859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7136 | 1.0 | 3 | 0.6875 | 0.75 |
| 0.6702 | 2.0 | 6 | 0.6824 | 0.75 |
| 0.6456 | 3.0 | 9 | 0.6687 | 0.75 |
| 0.5934 | 4.0 | 12 | 0.6564 | 0.75 |
| 0.537 | 5.0 | 15 | 0.6428 | 0.75 |
| 0.4812 | 6.0 | 18 | 0.6180 | 0.75 |
| 0.4279 | 7.0 | 21 | 0.5864 | 0.75 |
| 0.3608 | 8.0 | 24 | 0.5540 | 0.75 |
| 0.3076 | 9.0 | 27 | 0.5012 | 1.0 |
| 0.2292 | 10.0 | 30 | 0.4497 | 1.0 |
| 0.1991 | 11.0 | 33 | 0.3945 | 1.0 |
| 0.1495 | 12.0 | 36 | 0.3483 | 1.0 |
| 0.1176 | 13.0 | 39 | 0.3061 | 1.0 |
| 0.0947 | 14.0 | 42 | 0.2683 | 1.0 |
| 0.0761 | 15.0 | 45 | 0.2295 | 1.0 |
| 0.0584 | 16.0 | 48 | 0.1996 | 1.0 |
| 0.0451 | 17.0 | 51 | 0.1739 | 1.0 |
| 0.0387 | 18.0 | 54 | 0.1521 | 1.0 |
| 0.0272 | 19.0 | 57 | 0.1333 | 1.0 |
| 0.0247 | 20.0 | 60 | 0.1171 | 1.0 |
| 0.0243 | 21.0 | 63 | 0.1044 | 1.0 |
| 0.0206 | 22.0 | 66 | 0.0943 | 1.0 |
| 0.0175 | 23.0 | 69 | 0.0859 | 1.0 |
| 0.0169 | 24.0 | 72 | 0.0799 | 1.0 |
| 0.0162 | 25.0 | 75 | 0.0746 | 1.0 |
| 0.0137 | 26.0 | 78 | 0.0705 | 1.0 |
| 0.0141 | 27.0 | 81 | 0.0674 | 1.0 |
| 0.0107 | 28.0 | 84 | 0.0654 | 1.0 |
| 0.0117 | 29.0 | 87 | 0.0634 | 1.0 |
| 0.0113 | 30.0 | 90 | 0.0617 | 1.0 |
| 0.0107 | 31.0 | 93 | 0.0599 | 1.0 |
| 0.0106 | 32.0 | 96 | 0.0585 | 1.0 |
| 0.0101 | 33.0 | 99 | 0.0568 | 1.0 |
| 0.0084 | 34.0 | 102 | 0.0553 | 1.0 |
| 0.0101 | 35.0 | 105 | 0.0539 | 1.0 |
| 0.0102 | 36.0 | 108 | 0.0529 | 1.0 |
| 0.009 | 37.0 | 111 | 0.0520 | 1.0 |
| 0.0092 | 38.0 | 114 | 0.0511 | 1.0 |
| 0.0073 | 39.0 | 117 | 0.0504 | 1.0 |
| 0.0081 | 40.0 | 120 | 0.0497 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.0492 | 1.0 |
| 0.0092 | 42.0 | 126 | 0.0488 | 1.0 |
| 0.008 | 43.0 | 129 | 0.0483 | 1.0 |
| 0.0087 | 44.0 | 132 | 0.0479 | 1.0 |
| 0.009 | 45.0 | 135 | 0.0474 | 1.0 |
| 0.0076 | 46.0 | 138 | 0.0470 | 1.0 |
| 0.0075 | 47.0 | 141 | 0.0467 | 1.0 |
| 0.008 | 48.0 | 144 | 0.0465 | 1.0 |
| 0.0069 | 49.0 | 147 | 0.0464 | 1.0 |
| 0.0077 | 50.0 | 150 | 0.0464 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-3", "results": []}]} | SetFit/distilbert-base-uncased__subj__train-8-3 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3305
- Accuracy: 0.8565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6991 | 1.0 | 3 | 0.6772 | 0.75 |
| 0.6707 | 2.0 | 6 | 0.6704 | 0.75 |
| 0.6402 | 3.0 | 9 | 0.6608 | 1.0 |
| 0.5789 | 4.0 | 12 | 0.6547 | 0.75 |
| 0.5211 | 5.0 | 15 | 0.6434 | 0.75 |
| 0.454 | 6.0 | 18 | 0.6102 | 1.0 |
| 0.4187 | 7.0 | 21 | 0.5701 | 1.0 |
| 0.3401 | 8.0 | 24 | 0.5289 | 1.0 |
| 0.3107 | 9.0 | 27 | 0.4737 | 1.0 |
| 0.2381 | 10.0 | 30 | 0.4255 | 1.0 |
| 0.1982 | 11.0 | 33 | 0.3685 | 1.0 |
| 0.1631 | 12.0 | 36 | 0.3200 | 1.0 |
| 0.1234 | 13.0 | 39 | 0.2798 | 1.0 |
| 0.0993 | 14.0 | 42 | 0.2455 | 1.0 |
| 0.0781 | 15.0 | 45 | 0.2135 | 1.0 |
| 0.0586 | 16.0 | 48 | 0.1891 | 1.0 |
| 0.0513 | 17.0 | 51 | 0.1671 | 1.0 |
| 0.043 | 18.0 | 54 | 0.1427 | 1.0 |
| 0.0307 | 19.0 | 57 | 0.1225 | 1.0 |
| 0.0273 | 20.0 | 60 | 0.1060 | 1.0 |
| 0.0266 | 21.0 | 63 | 0.0920 | 1.0 |
| 0.0233 | 22.0 | 66 | 0.0823 | 1.0 |
| 0.0185 | 23.0 | 69 | 0.0751 | 1.0 |
| 0.0173 | 24.0 | 72 | 0.0698 | 1.0 |
| 0.0172 | 25.0 | 75 | 0.0651 | 1.0 |
| 0.0142 | 26.0 | 78 | 0.0613 | 1.0 |
| 0.0151 | 27.0 | 81 | 0.0583 | 1.0 |
| 0.0117 | 28.0 | 84 | 0.0563 | 1.0 |
| 0.0123 | 29.0 | 87 | 0.0546 | 1.0 |
| 0.0121 | 30.0 | 90 | 0.0531 | 1.0 |
| 0.0123 | 31.0 | 93 | 0.0511 | 1.0 |
| 0.0112 | 32.0 | 96 | 0.0496 | 1.0 |
| 0.0103 | 33.0 | 99 | 0.0481 | 1.0 |
| 0.0086 | 34.0 | 102 | 0.0468 | 1.0 |
| 0.0096 | 35.0 | 105 | 0.0457 | 1.0 |
| 0.0107 | 36.0 | 108 | 0.0447 | 1.0 |
| 0.0095 | 37.0 | 111 | 0.0439 | 1.0 |
| 0.0102 | 38.0 | 114 | 0.0429 | 1.0 |
| 0.0077 | 39.0 | 117 | 0.0422 | 1.0 |
| 0.0092 | 40.0 | 120 | 0.0415 | 1.0 |
| 0.0083 | 41.0 | 123 | 0.0409 | 1.0 |
| 0.0094 | 42.0 | 126 | 0.0404 | 1.0 |
| 0.0084 | 43.0 | 129 | 0.0400 | 1.0 |
| 0.0085 | 44.0 | 132 | 0.0396 | 1.0 |
| 0.0092 | 45.0 | 135 | 0.0392 | 1.0 |
| 0.0076 | 46.0 | 138 | 0.0389 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.0388 | 1.0 |
| 0.0085 | 48.0 | 144 | 0.0387 | 1.0 |
| 0.0071 | 49.0 | 147 | 0.0386 | 1.0 |
| 0.0079 | 50.0 | 150 | 0.0386 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-4", "results": []}]} | SetFit/distilbert-base-uncased__subj__train-8-4 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6927
- Accuracy: 0.506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7102 | 1.0 | 3 | 0.6790 | 0.75 |
| 0.6693 | 2.0 | 6 | 0.6831 | 0.75 |
| 0.6438 | 3.0 | 9 | 0.6876 | 0.75 |
| 0.6047 | 4.0 | 12 | 0.6970 | 0.75 |
| 0.547 | 5.0 | 15 | 0.7065 | 0.75 |
| 0.4885 | 6.0 | 18 | 0.7114 | 0.75 |
| 0.4601 | 7.0 | 21 | 0.7147 | 0.5 |
| 0.4017 | 8.0 | 24 | 0.7178 | 0.5 |
| 0.3474 | 9.0 | 27 | 0.7145 | 0.5 |
| 0.2624 | 10.0 | 30 | 0.7153 | 0.5 |
| 0.2175 | 11.0 | 33 | 0.7158 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-5", "results": []}]} | SetFit/distilbert-base-uncased__subj__train-8-5 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6075
- Accuracy: 0.7485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7163 | 1.0 | 3 | 0.6923 | 0.5 |
| 0.6648 | 2.0 | 6 | 0.6838 | 0.5 |
| 0.6329 | 3.0 | 9 | 0.6747 | 0.75 |
| 0.5836 | 4.0 | 12 | 0.6693 | 0.5 |
| 0.5287 | 5.0 | 15 | 0.6670 | 0.25 |
| 0.4585 | 6.0 | 18 | 0.6517 | 0.5 |
| 0.415 | 7.0 | 21 | 0.6290 | 0.5 |
| 0.3353 | 8.0 | 24 | 0.6019 | 0.5 |
| 0.2841 | 9.0 | 27 | 0.5613 | 0.75 |
| 0.2203 | 10.0 | 30 | 0.5222 | 1.0 |
| 0.1743 | 11.0 | 33 | 0.4769 | 1.0 |
| 0.1444 | 12.0 | 36 | 0.4597 | 1.0 |
| 0.1079 | 13.0 | 39 | 0.4462 | 1.0 |
| 0.0891 | 14.0 | 42 | 0.4216 | 1.0 |
| 0.0704 | 15.0 | 45 | 0.3880 | 1.0 |
| 0.0505 | 16.0 | 48 | 0.3663 | 1.0 |
| 0.0428 | 17.0 | 51 | 0.3536 | 1.0 |
| 0.0356 | 18.0 | 54 | 0.3490 | 1.0 |
| 0.0283 | 19.0 | 57 | 0.3531 | 1.0 |
| 0.025 | 20.0 | 60 | 0.3595 | 1.0 |
| 0.0239 | 21.0 | 63 | 0.3594 | 1.0 |
| 0.0202 | 22.0 | 66 | 0.3521 | 1.0 |
| 0.0168 | 23.0 | 69 | 0.3475 | 1.0 |
| 0.0159 | 24.0 | 72 | 0.3458 | 1.0 |
| 0.0164 | 25.0 | 75 | 0.3409 | 1.0 |
| 0.0132 | 26.0 | 78 | 0.3360 | 1.0 |
| 0.0137 | 27.0 | 81 | 0.3302 | 1.0 |
| 0.0112 | 28.0 | 84 | 0.3235 | 1.0 |
| 0.0113 | 29.0 | 87 | 0.3178 | 1.0 |
| 0.0111 | 30.0 | 90 | 0.3159 | 1.0 |
| 0.0113 | 31.0 | 93 | 0.3108 | 1.0 |
| 0.0107 | 32.0 | 96 | 0.3101 | 1.0 |
| 0.0101 | 33.0 | 99 | 0.3100 | 1.0 |
| 0.0083 | 34.0 | 102 | 0.3110 | 1.0 |
| 0.0092 | 35.0 | 105 | 0.3117 | 1.0 |
| 0.0102 | 36.0 | 108 | 0.3104 | 1.0 |
| 0.0086 | 37.0 | 111 | 0.3086 | 1.0 |
| 0.0092 | 38.0 | 114 | 0.3047 | 1.0 |
| 0.0072 | 39.0 | 117 | 0.3024 | 1.0 |
| 0.0079 | 40.0 | 120 | 0.3014 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.2983 | 1.0 |
| 0.0091 | 42.0 | 126 | 0.2948 | 1.0 |
| 0.0077 | 43.0 | 129 | 0.2915 | 1.0 |
| 0.0085 | 44.0 | 132 | 0.2890 | 1.0 |
| 0.009 | 45.0 | 135 | 0.2870 | 1.0 |
| 0.0073 | 46.0 | 138 | 0.2856 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.2844 | 1.0 |
| 0.0076 | 48.0 | 144 | 0.2841 | 1.0 |
| 0.0065 | 49.0 | 147 | 0.2836 | 1.0 |
| 0.0081 | 50.0 | 150 | 0.2835 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-6", "results": []}]} | SetFit/distilbert-base-uncased__subj__train-8-6 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2766
- Accuracy: 0.8845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7044 | 1.0 | 3 | 0.6909 | 0.5 |
| 0.6678 | 2.0 | 6 | 0.6901 | 0.5 |
| 0.6336 | 3.0 | 9 | 0.6807 | 0.5 |
| 0.5926 | 4.0 | 12 | 0.6726 | 0.5 |
| 0.5221 | 5.0 | 15 | 0.6648 | 0.5 |
| 0.4573 | 6.0 | 18 | 0.6470 | 0.5 |
| 0.4177 | 7.0 | 21 | 0.6251 | 0.5 |
| 0.3252 | 8.0 | 24 | 0.5994 | 0.5 |
| 0.2831 | 9.0 | 27 | 0.5529 | 0.5 |
| 0.213 | 10.0 | 30 | 0.5078 | 0.75 |
| 0.1808 | 11.0 | 33 | 0.4521 | 1.0 |
| 0.1355 | 12.0 | 36 | 0.3996 | 1.0 |
| 0.1027 | 13.0 | 39 | 0.3557 | 1.0 |
| 0.0862 | 14.0 | 42 | 0.3121 | 1.0 |
| 0.0682 | 15.0 | 45 | 0.2828 | 1.0 |
| 0.0517 | 16.0 | 48 | 0.2603 | 1.0 |
| 0.0466 | 17.0 | 51 | 0.2412 | 1.0 |
| 0.038 | 18.0 | 54 | 0.2241 | 1.0 |
| 0.0276 | 19.0 | 57 | 0.2096 | 1.0 |
| 0.0246 | 20.0 | 60 | 0.1969 | 1.0 |
| 0.0249 | 21.0 | 63 | 0.1859 | 1.0 |
| 0.0201 | 22.0 | 66 | 0.1770 | 1.0 |
| 0.018 | 23.0 | 69 | 0.1703 | 1.0 |
| 0.0164 | 24.0 | 72 | 0.1670 | 1.0 |
| 0.0172 | 25.0 | 75 | 0.1639 | 1.0 |
| 0.0135 | 26.0 | 78 | 0.1604 | 1.0 |
| 0.014 | 27.0 | 81 | 0.1585 | 1.0 |
| 0.0108 | 28.0 | 84 | 0.1569 | 1.0 |
| 0.0116 | 29.0 | 87 | 0.1549 | 1.0 |
| 0.0111 | 30.0 | 90 | 0.1532 | 1.0 |
| 0.0113 | 31.0 | 93 | 0.1513 | 1.0 |
| 0.0104 | 32.0 | 96 | 0.1503 | 1.0 |
| 0.01 | 33.0 | 99 | 0.1490 | 1.0 |
| 0.0079 | 34.0 | 102 | 0.1479 | 1.0 |
| 0.0097 | 35.0 | 105 | 0.1466 | 1.0 |
| 0.0112 | 36.0 | 108 | 0.1458 | 1.0 |
| 0.0091 | 37.0 | 111 | 0.1457 | 1.0 |
| 0.0098 | 38.0 | 114 | 0.1454 | 1.0 |
| 0.0076 | 39.0 | 117 | 0.1451 | 1.0 |
| 0.0085 | 40.0 | 120 | 0.1448 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.1445 | 1.0 |
| 0.0096 | 42.0 | 126 | 0.1440 | 1.0 |
| 0.0081 | 43.0 | 129 | 0.1430 | 1.0 |
| 0.0083 | 44.0 | 132 | 0.1424 | 1.0 |
| 0.0088 | 45.0 | 135 | 0.1418 | 1.0 |
| 0.0077 | 46.0 | 138 | 0.1414 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.1413 | 1.0 |
| 0.0084 | 48.0 | 144 | 0.1412 | 1.0 |
| 0.0072 | 49.0 | 147 | 0.1411 | 1.0 |
| 0.0077 | 50.0 | 150 | 0.1411 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
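### Example usage
A hedged usage sketch, not part of the original card: the checkpoint can be queried with the standard text-classification pipeline, though it may expose generic `LABEL_0`/`LABEL_1` names because the label mapping is not documented here.
```python
from transformers import pipeline

# Model id taken from this card's metadata.
classifier = pipeline(
    "text-classification",
    model="SetFit/distilbert-base-uncased__subj__train-8-7",
)

print(classifier("The movie was a thrilling ride from start to finish."))
```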
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-7", "results": []}]} | SetFit/distilbert-base-uncased__subj__train-8-7 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3160
- Accuracy: 0.8735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7187 | 1.0 | 3 | 0.6776 | 1.0 |
| 0.684 | 2.0 | 6 | 0.6608 | 1.0 |
| 0.6532 | 3.0 | 9 | 0.6364 | 1.0 |
| 0.5996 | 4.0 | 12 | 0.6119 | 1.0 |
| 0.5242 | 5.0 | 15 | 0.5806 | 1.0 |
| 0.4612 | 6.0 | 18 | 0.5320 | 1.0 |
| 0.4192 | 7.0 | 21 | 0.4714 | 1.0 |
| 0.3274 | 8.0 | 24 | 0.4071 | 1.0 |
| 0.2871 | 9.0 | 27 | 0.3378 | 1.0 |
| 0.2082 | 10.0 | 30 | 0.2822 | 1.0 |
| 0.1692 | 11.0 | 33 | 0.2271 | 1.0 |
| 0.1242 | 12.0 | 36 | 0.1793 | 1.0 |
| 0.0977 | 13.0 | 39 | 0.1417 | 1.0 |
| 0.0776 | 14.0 | 42 | 0.1117 | 1.0 |
| 0.0631 | 15.0 | 45 | 0.0894 | 1.0 |
| 0.0453 | 16.0 | 48 | 0.0733 | 1.0 |
| 0.0399 | 17.0 | 51 | 0.0617 | 1.0 |
| 0.0333 | 18.0 | 54 | 0.0528 | 1.0 |
| 0.0266 | 19.0 | 57 | 0.0454 | 1.0 |
| 0.0234 | 20.0 | 60 | 0.0393 | 1.0 |
| 0.0223 | 21.0 | 63 | 0.0345 | 1.0 |
| 0.0195 | 22.0 | 66 | 0.0309 | 1.0 |
| 0.0161 | 23.0 | 69 | 0.0281 | 1.0 |
| 0.0167 | 24.0 | 72 | 0.0260 | 1.0 |
| 0.0163 | 25.0 | 75 | 0.0242 | 1.0 |
| 0.0134 | 26.0 | 78 | 0.0227 | 1.0 |
| 0.0128 | 27.0 | 81 | 0.0214 | 1.0 |
| 0.0101 | 28.0 | 84 | 0.0204 | 1.0 |
| 0.0109 | 29.0 | 87 | 0.0194 | 1.0 |
| 0.0112 | 30.0 | 90 | 0.0186 | 1.0 |
| 0.0108 | 31.0 | 93 | 0.0179 | 1.0 |
| 0.011 | 32.0 | 96 | 0.0174 | 1.0 |
| 0.0099 | 33.0 | 99 | 0.0169 | 1.0 |
| 0.0083 | 34.0 | 102 | 0.0164 | 1.0 |
| 0.0096 | 35.0 | 105 | 0.0160 | 1.0 |
| 0.01 | 36.0 | 108 | 0.0156 | 1.0 |
| 0.0084 | 37.0 | 111 | 0.0152 | 1.0 |
| 0.0089 | 38.0 | 114 | 0.0149 | 1.0 |
| 0.0073 | 39.0 | 117 | 0.0146 | 1.0 |
| 0.0082 | 40.0 | 120 | 0.0143 | 1.0 |
| 0.008 | 41.0 | 123 | 0.0141 | 1.0 |
| 0.0093 | 42.0 | 126 | 0.0139 | 1.0 |
| 0.0078 | 43.0 | 129 | 0.0138 | 1.0 |
| 0.0086 | 44.0 | 132 | 0.0136 | 1.0 |
| 0.009 | 45.0 | 135 | 0.0135 | 1.0 |
| 0.0072 | 46.0 | 138 | 0.0134 | 1.0 |
| 0.0075 | 47.0 | 141 | 0.0133 | 1.0 |
| 0.0082 | 48.0 | 144 | 0.0133 | 1.0 |
| 0.0068 | 49.0 | 147 | 0.0132 | 1.0 |
| 0.0074 | 50.0 | 150 | 0.0132 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-8", "results": []}]} | SetFit/distilbert-base-uncased__subj__train-8-8 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4865
- Accuracy: 0.778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7024 | 1.0 | 3 | 0.6843 | 0.75 |
| 0.67 | 2.0 | 6 | 0.6807 | 0.5 |
| 0.6371 | 3.0 | 9 | 0.6677 | 0.5 |
| 0.585 | 4.0 | 12 | 0.6649 | 0.5 |
| 0.5122 | 5.0 | 15 | 0.6707 | 0.5 |
| 0.4379 | 6.0 | 18 | 0.6660 | 0.5 |
| 0.4035 | 7.0 | 21 | 0.6666 | 0.5 |
| 0.323 | 8.0 | 24 | 0.6672 | 0.5 |
| 0.2841 | 9.0 | 27 | 0.6534 | 0.5 |
| 0.21 | 10.0 | 30 | 0.6456 | 0.5 |
| 0.1735 | 11.0 | 33 | 0.6325 | 0.5 |
| 0.133 | 12.0 | 36 | 0.6214 | 0.5 |
| 0.0986 | 13.0 | 39 | 0.6351 | 0.5 |
| 0.081 | 14.0 | 42 | 0.6495 | 0.5 |
| 0.0638 | 15.0 | 45 | 0.6671 | 0.5 |
| 0.0449 | 16.0 | 48 | 0.7156 | 0.5 |
| 0.0399 | 17.0 | 51 | 0.7608 | 0.5 |
| 0.0314 | 18.0 | 54 | 0.7796 | 0.5 |
| 0.0243 | 19.0 | 57 | 0.7789 | 0.5 |
| 0.0227 | 20.0 | 60 | 0.7684 | 0.5 |
| 0.0221 | 21.0 | 63 | 0.7628 | 0.5 |
| 0.0192 | 22.0 | 66 | 0.7728 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased__subj__train-8-9", "results": []}]} | SetFit/distilbert-base-uncased__subj__train-8-9 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | {} | SetFit/distilbert-base-uncased__tweet_eval_stance__all-train | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Setodab/sentencemodel | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Sezai/deneme | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | transformers |
# Small-E-Czech
Small-E-Czech is an [Electra](https://arxiv.org/abs/2003.10555)-small model pretrained on a Czech web corpus created at [Seznam.cz](https://www.seznam.cz/) and introduced in an [IAAI 2022 paper](https://arxiv.org/abs/2112.01810). Like other pretrained models, it should be finetuned on a downstream task of interest before use. At Seznam.cz, it has helped improve [web search ranking](https://blog.seznam.cz/2021/02/vyhledavani-pomoci-vyznamovych-vektoru/), query typo correction, and clickbait title detection. We release it under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/) (i.e., allowing commercial use). To raise an issue, please visit our [GitHub](https://github.com/seznam/small-e-czech).
### How to use the discriminator in transformers
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("Seznam/small-e-czech")
tokenizer = ElectraTokenizerFast.from_pretrained("Seznam/small-e-czech")
sentence = "Za hory, za doly, mé zlaté parohy"
fake_sentence = "Za hory, za doly, kočka zlaté parohy"
fake_sentence_tokens = ["[CLS]"] + tokenizer.tokenize(fake_sentence) + ["[SEP]"]
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
outputs = discriminator(fake_inputs)
# Per-token probability that the token was replaced (faked) by the generator.
predictions = torch.nn.Sigmoid()(outputs[0]).cpu().detach().numpy()
for token in fake_sentence_tokens:
    print("{:>7s}".format(token), end="")
print()
for prediction in predictions.squeeze():
    print("{:7.1f}".format(prediction), end="")
print()
```
In the output, we can see each token's probability of not belonging in the sentence (i.e., of having been faked by the generator) according to the discriminator:
```
[CLS] za hory , za dol ##y , kočka zlaté paro ##hy [SEP]
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.3 0.2 0.1 0.0
```
### Finetuning
For instructions on how to finetune the model on a new task, see the official HuggingFace transformers [tutorial](https://huggingface.co/transformers/training.html).
| {"language": "cs", "license": "cc-by-4.0"} | Seznam/small-e-czech | null | [
"transformers",
"pytorch",
"tf",
"electra",
"cs",
"arxiv:2003.10555",
"arxiv:2112.01810",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Shadaab27/distilroberta-base-finetuned-TeamBHP | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ShadowKing/Aguante_armin | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Shah92/dspd-assignment-1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
summarization | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mode-bart-deutsch
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the mlsum de dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2152
- Rouge1: 41.698
- Rouge2: 31.3548
- Rougel: 38.2817
- Rougelsum: 39.6349
- Gen Len: 63.1723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
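### Example usage
A minimal usage sketch (not part of the original card), assuming the standard summarization pipeline; the generation lengths are illustrative:
```python
from transformers import pipeline

# Model id taken from this card's metadata.
summarizer = pipeline("summarization", model="Shahm/bart-german")

# Example German input ("The federal government decided on new measures
# on Wednesday to support the economy and secure jobs.").
article = (
    "Die Bundesregierung hat am Mittwoch neue Maßnahmen beschlossen, "
    "um die Wirtschaft zu stützen und Arbeitsplätze zu sichern."
)

print(summarizer(article, max_length=64, min_length=8)[0]["summary_text"])
```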
| {"language": "de", "license": "apache-2.0", "tags": ["generated_from_trainer", "summarization"], "datasets": ["mlsum"], "metrics": ["rouge"], "model-index": [{"name": "mode-bart-deutsch", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "mlsum de", "type": "mlsum", "args": "de"}, "metrics": [{"type": "rouge", "value": 41.698, "name": "Rouge1"}]}]}]} | Shahm/bart-german | null | [
"transformers",
"pytorch",
"tensorboard",
"onnx",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"summarization",
"de",
"dataset:mlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
summarization | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-seven-epoch-base-german
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the mlsum de dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5491
- Rouge1: 42.3787
- Rouge2: 32.0253
- Rougel: 38.9529
- Rougelsum: 40.4544
- Gen Len: 47.7873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
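### Example usage
A minimal usage sketch (not part of the original card). T5 checkpoints are usually driven by a task prefix; `"summarize: "` is the stock T5 prefix and is an assumption here, since the card does not document the prefix used during fine-tuning:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Shahm/t5-small-german")
model = AutoModelForSeq2SeqLM.from_pretrained("Shahm/t5-small-german")

# German input text; the "summarize: " prefix is an assumption (see above).
text = "summarize: " + "Der Artikeltext, der zusammengefasst werden soll ..."

inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```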
| {"language": "de", "license": "apache-2.0", "tags": ["generated_from_trainer", "summarization"], "datasets": ["mlsum"], "metrics": ["rouge"], "model-index": [{"name": "t5-seven-epoch-base-german", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "mlsum de", "type": "mlsum", "args": "de"}, "metrics": [{"type": "rouge", "value": 42.3787, "name": "Rouge1"}]}]}]} | Shahm/t5-small-german | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"de",
"dataset:mlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# Spongebob DialoGPT model
| {"tags": ["conversational"]} | Shakaw/DialoGPT-small-spongebot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | transformers |
# ChineseBERT-base
This repository contains the code, model, and dataset for **ChineseBERT**, presented at ACL 2021.
paper:
**[ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information](https://arxiv.org/abs/2106.16038)**
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
code:
[ChineseBERT github link](https://github.com/ShannonAI/ChineseBert)
## Model description
We propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese
characters into language model pretraining.
First, for each Chinese character, we get three kinds of embeddings.
- **Char Embedding:** the same as the original BERT token embedding.
- **Glyph Embedding:** captures visual features based on different fonts of a Chinese character.
- **Pinyin Embedding:** captures phonetic features from the pinyin sequence of a Chinese character.
Then the char, glyph, and pinyin embeddings are concatenated and mapped to a D-dimensional embedding through a fully connected layer to form the fusion embedding. Finally, the fusion embedding is added to the position embedding, and the sum is fed as input to the BERT model.
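As a concrete illustration, here is a minimal PyTorch sketch of that fusion step. It is not the official ShannonAI implementation (see the GitHub repository for that), and all names in it are hypothetical:
```python
import torch
import torch.nn as nn

class FusionEmbedding(nn.Module):
    """Sketch of the char/glyph/pinyin fusion described above."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Maps the concatenated 3*D embedding back to D dimensions.
        self.fusion_fc = nn.Linear(3 * hidden_size, hidden_size)

    def forward(self, char_emb, glyph_emb, pinyin_emb, position_emb):
        # Concatenate the three per-character embeddings ...
        concat = torch.cat([char_emb, glyph_emb, pinyin_emb], dim=-1)
        # ... map them to a D-dimensional fusion embedding ...
        fusion = self.fusion_fc(concat)
        # ... and add the position embedding before feeding BERT.
        return fusion + position_emb
```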
The following image shows the overall architecture of the ChineseBERT model.

ChineseBERT leverages the glyph and pinyin information of Chinese characters to enhance the model's ability to capture context semantics from surface character forms and to disambiguate polyphonic characters in Chinese.
| {} | ShannonAI/ChineseBERT-base | null | [
"transformers",
"pytorch",
"arxiv:2106.16038",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | transformers |
# ChineseBERT-large
This repository contains the code, model, and dataset for **ChineseBERT**, presented at ACL 2021.
paper:
**[ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information](https://arxiv.org/abs/2106.16038)**
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
code:
[ChineseBERT github link](https://github.com/ShannonAI/ChineseBert)
## Model description
We propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese
characters into language model pretraining.
First, for each Chinese character, we get three kinds of embeddings.
- **Char Embedding:** the same as the original BERT token embedding.
- **Glyph Embedding:** captures visual features based on different fonts of a Chinese character.
- **Pinyin Embedding:** captures phonetic features from the pinyin sequence of a Chinese character.
Then the char, glyph, and pinyin embeddings are concatenated and mapped to a D-dimensional embedding through a fully connected layer to form the fusion embedding. Finally, the fusion embedding is added to the position embedding, and the sum is fed as input to the BERT model.
The following image shows the overall architecture of the ChineseBERT model.

ChineseBERT leverages the glyph and pinyin information of Chinese characters to enhance the model's ability to capture context semantics from surface character forms and to disambiguate polyphonic characters in Chinese.
| {} | ShannonAI/ChineseBERT-large | null | [
"transformers",
"pytorch",
"arxiv:2106.16038",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | {} | Shanny/FinBERT | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers | {} | Shappey/roberta-base-QnA-squad2-trained | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
[](https://colab.research.google.com/drive/1dqeUwS_DZ-urrmYzB29nTCBUltwJxhbh?usp=sharing)
# 22 Language Identifier - BERT
This model is trained to identify the following 22 different languages.
- Arabic
- Chinese
- Dutch
- English
- Estonian
- French
- Hindi
- Indonesian
- Japanese
- Korean
- Latin
- Persian
- Portuguese
- Pushto
- Romanian
- Russian
- Spanish
- Swedish
- Tamil
- Thai
- Turkish
- Urdu
## Loading the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("SharanSMenon/22-languages-bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("SharanSMenon/22-languages-bert-base-cased")
```
## Inference
```python
def predict(sentence):
    # Tokenize the sentence, run the classifier, and map the top logit
    # back to its language name via the model config.
    tokenized = tokenizer(sentence, return_tensors="pt")
    outputs = model(**tokenized)
    return model.config.id2label[outputs.logits.argmax(dim=1).item()]
```
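If you also need a confidence score, here is a small variant of `predict`. This helper is not part of the original model card; it assumes `torch` is installed and reuses the `tokenizer` and `model` loaded above:
```python
import torch

def predict_proba(sentence):
    tokenized = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**tokenized)
    probs = torch.softmax(outputs.logits, dim=1).squeeze()
    label_id = probs.argmax().item()
    # Return both the predicted language and its softmax probability.
    return model.config.id2label[label_id], probs[label_id].item()
```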
### Examples
```python
sentence1 = "in war resolution, in defeat defiance, in victory magnanimity"
predict(sentence1) # English
sentence2 = "en la guerra resolución en la derrota desafío en la victoria magnanimidad"
predict(sentence2) # Spanish
sentence3 = "هذا هو أعظم إله على الإطلاق"
predict(sentence3) # Arabic
``` | {"metrics": ["accuracy"], "widget": [{"text": "In war resolution, in defeat defiance, in victory magnanimity"}, {"text": "en la guerra resoluci\u00f3n en la derrota desaf\u00edo en la victoria magnanimidad"}]} | SharanSMenon/22-languages-bert-base-cased | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | SharanSMenon/birds-identifier-325-species | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ShaswatSheshank/gpt2-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
feature-extraction | transformers | {} | Shauli/IE-metric-model-spike | null | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |