pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0-18.3M) | metadata (stringlengths, 2-1.07B) | id (stringlengths, 5-122) | last_modified (null) | tags (listlengths, 1-1.84k) | sha (null) | created_at (stringlengths, 25) |
---|---|---|---|---|---|---|---|---|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-10
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
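The checkpoint can be loaded with the standard `transformers` question-answering pipeline. A minimal sketch follows; the question and context strings are illustrative placeholders, not taken from the training data.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an extractive QA pipeline.
qa = pipeline(
    "question-answering",
    model="anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-10",
)

# Illustrative SQuAD-style question/context pair.
result = qa(
    question="What was the model fine-tuned on?",
    context="The model is a fine-tuned version of SpanBERT/spanbert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```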
| {"tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-10", "results": []}]} | anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-10 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-2
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-2", "results": []}]} | anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-2 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-4
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-4", "results": []}]} | anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-4 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-6
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-6", "results": []}]} | anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-6 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-8
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-8", "results": []}]} | anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-8 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | anas-awadalla/spanbert-base-cased-few-shot-k-87599-finetuned-squad-seed-42 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | anasaqsme/anasdistil | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]} | anasaqsme/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | anaustinbeing/cords-model | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
# XLM-RoBERTa large for QA on the Vietnamese language (also supports various other languages)
## Overview
- Language model: xlm-roberta-large
- Fine-tuned from: [deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2)
- Language: Vietnamese
- Downstream-task: Extractive QA
- Dataset: [mailong25/bert-vietnamese-question-answering](https://github.com/mailong25/bert-vietnamese-question-answering/tree/master/dataset)
- Training data: train-v2.0.json (SQuAD 2.0 format)
- Eval data: dev-v2.0.json (SQuAD 2.0 format)
- Infrastructure: 1x Tesla P100 (Google Colab)
## Performance
Evaluated on dev-v2.0.json
```
exact: 136 / 141
f1: 0.9692671394799054
```
Evaluated on Vietnamese XQuAD: [xquad.vi.json](https://github.com/deepmind/xquad/blob/master/xquad.vi.json)
```
exact: 604 / 1190
f1: 0.7224454217571596
```
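The model can be queried through the `transformers` question-answering pipeline. A minimal sketch, reusing an abridged version of the widget example shipped with this card (the shortened context is an assumption for brevity):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="ancs21/xlm-roberta-large-vi-qa")

result = qa(
    question="Toà nhà nào cao nhất Việt Nam?",
    context=(
        "Landmark 81 là một toà nhà chọc trời. Toà tháp cao 81 tầng, "
        "hiện tại là toà nhà cao nhất Việt Nam."
    ),
)
print(result["answer"], result["score"])
```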
## Author
An Pham (ancs21.ps [at] gmail.com)
## License
MIT | {"language": "vi", "license": "mit", "tags": ["vi", "xlm-roberta"], "metrics": ["f1", "em"], "widget": [{"text": "To\u00e0 nh\u00e0 n\u00e0o cao nh\u1ea5t Vi\u1ec7t Nam?", "context": "Landmark 81 l\u00e0 m\u1ed9t to\u00e0 nh\u00e0 ch\u1ecdc tr\u1eddi trong t\u1ed5 h\u1ee3p d\u1ef1 \u00e1n Vinhomes T\u00e2n C\u1ea3ng, m\u1ed9t d\u1ef1 \u00e1n c\u00f3 t\u1ed5ng m\u1ee9c \u0111\u1ea7u t\u01b0 40.000 t\u1ef7 \u0111\u1ed3ng, do C\u00f4ng ty C\u1ed5 ph\u1ea7n \u0110\u1ea7u t\u01b0 x\u00e2y d\u1ef1ng T\u00e2n Li\u00ean Ph\u00e1t thu\u1ed9c Vingroup l\u00e0m ch\u1ee7 \u0111\u1ea7u t\u01b0. To\u00e0 th\u00e1p cao 81 t\u1ea7ng, hi\u1ec7n t\u1ea1i l\u00e0 to\u00e0 nh\u00e0 cao nh\u1ea5t Vi\u1ec7t Nam v\u00e0 l\u00e0 to\u00e0 nh\u00e0 cao nh\u1ea5t \u0110\u00f4ng Nam \u00c1 t\u1eeb th\u00e1ng 3 n\u0103m 2018."}]} | ancs21/xlm-roberta-large-vi-qa | null | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"vi",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | andebraa/Wind_sentiment_NorBert | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
- Precision: 0.9406
- Recall: 0.9463
- F1: 0.9434
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5855 | 1.0 | 878 | 0.0848 | 0.8965 | 0.8980 | 0.8973 | 0.9760 |
| 0.058 | 2.0 | 1756 | 0.0607 | 0.9357 | 0.9379 | 0.9368 | 0.9840 |
| 0.0282 | 3.0 | 2634 | 0.0604 | 0.9354 | 0.9420 | 0.9387 | 0.9852 |
| 0.0148 | 4.0 | 3512 | 0.0606 | 0.9386 | 0.9485 | 0.9435 | 0.9861 |
| 0.0101 | 5.0 | 4390 | 0.0620 | 0.9406 | 0.9463 | 0.9434 | 0.9861 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
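The checkpoint works with the `transformers` token-classification pipeline. A minimal sketch; the example sentence is illustrative, and `aggregation_strategy="simple"` assumes a reasonably recent `transformers` release:

```python
from transformers import pipeline

# NER pipeline over the CoNLL-2003 fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="andi611/bert-base-cased-ner-conll2003",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("George Washington lived in Mount Vernon, Virginia."))
```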
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-base-cased-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9860628716077}}]}]} | andi611/bert-base-cased-ner-conll2003 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1258
- Precision: 0.0269
- Recall: 0.1379
- F1: 0.0451
- Accuracy: 0.1988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 4 | 2.1296 | 0.0270 | 0.1389 | 0.0452 | 0.1942 |
| No log | 2.0 | 8 | 2.1258 | 0.0269 | 0.1379 | 0.0451 | 0.1988 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-base-uncased-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.19881805328292054}}]}]} | andi611/bert-base-uncased-ner-conll2003 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-ner
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0591
- Precision: 0.9465
- Recall: 0.9568
- F1: 0.9517
- Accuracy: 0.9877
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1702 | 1.0 | 878 | 0.0578 | 0.9202 | 0.9347 | 0.9274 | 0.9836 |
| 0.0392 | 2.0 | 1756 | 0.0601 | 0.9306 | 0.9448 | 0.9377 | 0.9851 |
| 0.0157 | 3.0 | 2634 | 0.0517 | 0.9405 | 0.9544 | 0.9474 | 0.9875 |
| 0.0057 | 4.0 | 3512 | 0.0591 | 0.9465 | 0.9568 | 0.9517 | 0.9877 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-large-uncased-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9877039414110284}}]}]} | andi611/bert-large-uncased-ner-conll2003 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-ner-conll2003
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking](https://huggingface.co/bert-large-uncased-whole-word-masking) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0592
- Precision: 0.9527
- Recall: 0.9569
- F1: 0.9548
- Accuracy: 0.9887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4071 | 1.0 | 877 | 0.0584 | 0.9306 | 0.9418 | 0.9362 | 0.9851 |
| 0.0482 | 2.0 | 1754 | 0.0594 | 0.9362 | 0.9491 | 0.9426 | 0.9863 |
| 0.0217 | 3.0 | 2631 | 0.0550 | 0.9479 | 0.9584 | 0.9531 | 0.9885 |
| 0.0103 | 4.0 | 3508 | 0.0592 | 0.9527 | 0.9569 | 0.9548 | 0.9887 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-large-uncased-whole-word-masking-ner-conll2003", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9886888970085945}}]}]} | andi611/bert-large-uncased-whole-word-masking-ner-conll2003 | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"en",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-with-ner-Pistherea-conll2003-with-neg-with-repeat
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the conll2003 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"language": ["en"], "license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2", "conll2003"], "model_index": [{"name": "bert-large-uncased-whole-word-masking-squad2-with-ner-Pistherea-conll2003-with-neg-with-repeat", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "args": "conll2003"}}, {"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003"}}]}]} | andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-Pistherea-conll2003-with-neg-with-repeat | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"en",
"dataset:squad_v2",
"dataset:conll2003",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the conll2003 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"language": ["en"], "license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2", "conll2003"], "model_index": [{"name": "bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "args": "conll2003"}}, {"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003"}}]}]} | andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"en",
"dataset:squad_v2",
"dataset:conll2003",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the conll2003 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"language": ["en"], "license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2", "conll2003"], "model_index": [{"name": "bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "args": "conll2003"}}, {"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003"}}]}]} | andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"en",
"dataset:squad_v2",
"dataset:conll2003",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the mit_movie datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"language": ["en"], "license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2", "mit_movie"], "model_index": [{"name": "bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "squad_v2", "type": "squad_v2"}}, {"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "mit_movie", "type": "mit_movie"}}]}]} | andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"en",
"dataset:squad_v2",
"dataset:mit_movie",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the mit_restaurant datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"language": ["en"], "license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2", "mit_restaurant"], "model_index": [{"name": "bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "squad_v2", "type": "squad_v2"}}, {"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "mit_restaurant", "type": "mit_restaurant"}}]}]} | andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"en",
"dataset:squad_v2",
"dataset:mit_restaurant",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-agnews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1652
- Accuracy: 0.9474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1916 | 1.0 | 3375 | 0.1741 | 0.9412 |
| 0.123 | 2.0 | 6750 | 0.1631 | 0.9483 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
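The checkpoint can be used with the `transformers` text-classification pipeline. A minimal sketch; the input sentence is illustrative, and the returned label names depend on the label mapping stored in the checkpoint's config:

```python
from transformers import pipeline

# Sequence-classification pipeline over the AG News fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="andi611/distilbert-base-uncased-ner-agnews",
)

print(classifier("Stocks rallied after the central bank left interest rates unchanged."))
```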
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["ag_news"], "metrics": ["accuracy"], "model_index": [{"name": "distilbert-base-uncased-agnews", "results": [{"dataset": {"name": "ag_news", "type": "ag_news", "args": "default"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9473684210526315}}]}]} | andi611/distilbert-base-uncased-ner-agnews | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:ag_news",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0664
- Precision: 0.9332
- Recall: 0.9423
- F1: 0.9377
- Accuracy: 0.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2042 | 1.0 | 878 | 0.0636 | 0.9230 | 0.9253 | 0.9241 | 0.9822 |
| 0.0428 | 2.0 | 1756 | 0.0577 | 0.9286 | 0.9370 | 0.9328 | 0.9841 |
| 0.0199 | 3.0 | 2634 | 0.0606 | 0.9364 | 0.9401 | 0.9383 | 0.9851 |
| 0.0121 | 4.0 | 3512 | 0.0641 | 0.9339 | 0.9380 | 0.9360 | 0.9847 |
| 0.0079 | 5.0 | 4390 | 0.0664 | 0.9332 | 0.9423 | 0.9377 | 0.9852 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.985193893275295}}]}]} | andi611/distilbert-base-uncased-ner-conll2003 | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-ner-mit-restaurant
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the mit_restaurant dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3097
- Precision: 0.7874
- Recall: 0.8104
- F1: 0.7988
- Accuracy: 0.9119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 431 | 0.4575 | 0.6220 | 0.6856 | 0.6523 | 0.8650 |
| 1.1705 | 2.0 | 862 | 0.3183 | 0.7747 | 0.7953 | 0.7848 | 0.9071 |
| 0.3254 | 3.0 | 1293 | 0.3163 | 0.7668 | 0.8021 | 0.7841 | 0.9058 |
| 0.2287 | 4.0 | 1724 | 0.3097 | 0.7874 | 0.8104 | 0.7988 | 0.9119 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mit_restaurant"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-ner-mit-restaurant", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "mit_restaurant", "type": "mit_restaurant"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9118988661540467}}]}]} | andi611/distilbert-base-uncased-ner-mit-restaurant | null | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"en",
"dataset:mit_restaurant",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-boolq
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the boolq dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2071
- Accuracy: 0.7315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6506 | 1.0 | 531 | 0.6075 | 0.6681 |
| 0.575 | 2.0 | 1062 | 0.5816 | 0.6978 |
| 0.4397 | 3.0 | 1593 | 0.6137 | 0.7253 |
| 0.2524 | 4.0 | 2124 | 0.8124 | 0.7466 |
| 0.126 | 5.0 | 2655 | 1.1437 | 0.7370 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
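BoolQ pairs a yes/no question with a passage, so the checkpoint is most naturally driven as a sentence-pair classifier. A minimal sketch; the pair ordering (question first, passage second) and the meaning of each output class are assumptions that depend on how the data was encoded during fine-tuning:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "andi611/distilbert-base-uncased-qa-boolq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode question and passage as a single text pair.
question = "Is the sky blue on a clear day?"
passage = "On a clear day the sky appears blue because of Rayleigh scattering."
inputs = tokenizer(question, passage, return_tensors="pt", truncation=True)

with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class order follows the checkpoint's label mapping
```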
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["boolq"], "metrics": ["accuracy"], "model_index": [{"name": "distilbert-base-uncased-boolq", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "boolq", "type": "boolq", "args": "default"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.7314984709480122}}]}]} | andi611/distilbert-base-uncased-qa-boolq | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:boolq",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-qa-with-ner
This model is a fine-tuned version of [andi611/distilbert-base-uncased-qa](https://huggingface.co/andi611/distilbert-base-uncased-qa) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "distilbert-base-uncased-qa-with-ner", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]} | andi611/distilbert-base-uncased-qa-with-ner | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-qa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model_index": [{"name": "distilbert-base-uncased-qa", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "squad", "type": "squad", "args": "plain_text"}}]}]} | andi611/distilbert-base-uncased-squad | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the squad_v2 and the mit_restaurant datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"language": ["en"], "tags": ["generated_from_trainer"], "datasets": ["squad_v2", "mit_restaurant"], "model_index": [{"name": "distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "squad_v2", "type": "squad_v2"}}, {"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "mit_restaurant", "type": "mit_restaurant"}}]}]} | andi611/distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"en",
"dataset:squad_v2",
"dataset:mit_restaurant",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]} | andi611/distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:conll2003",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-with-ner-with-neg-with-multi
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "distilbert-base-uncased-squad2-with-ner-with-neg-with-multi", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]} | andi611/distilbert-base-uncased-squad2-with-ner-with-neg-with-multi | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:conll2003",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-with-ner-with-neg-with-repeat
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "distilbert-base-uncased-squad2-with-ner-with-neg-with-repeat", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]} | andi611/distilbert-base-uncased-squad2-with-ner-with-neg-with-repeat | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:conll2003",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-with-ner-with-neg
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "distilbert-base-uncased-squad2-with-ner-with-neg", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]} | andi611/distilbert-base-uncased-squad2-with-ner-with-neg | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:conll2003",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-with-ner
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "distilbert-base-uncased-squad2-with-ner", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]} | andi611/distilbert-base-uncased-squad2-with-ner | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:conll2003",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ner
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0814
- eval_precision: 0.9101
- eval_recall: 0.9336
- eval_f1: 0.9217
- eval_accuracy: 0.9799
- eval_runtime: 10.2964
- eval_samples_per_second: 315.646
- eval_steps_per_second: 39.529
- epoch: 1.14
- step: 500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "roberta-base-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]} | andi611/roberta-base-ner-conll2003 | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# My Awesome Model
| {"tags": ["conversational"]} | andikarachman/DialoGPT-small-sheldon | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8885
- Mae: 0.4390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1089 | 1.0 | 235 | 0.9027 | 0.4756 |
| 0.9674 | 2.0 | 470 | 0.8885 | 0.4390 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en", "results": []}]} | anditya/xlm-roberta-base-finetuned-marc-en | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | andreas800/hgf_models | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# andreiliphdpr/bert-base-multilingual-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0423
- Train Accuracy: 0.9869
- Validation Loss: 0.0303
- Validation Accuracy: 0.9913
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 43750, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0423 | 0.9869 | 0.0303 | 0.9913 | 0 |
### Framework versions
- Transformers 4.15.0.dev0
- TensorFlow 2.6.2
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "andreiliphdpr/bert-base-multilingual-uncased-finetuned-cola", "results": []}]} | andreiliphdpr/bert-base-multilingual-uncased-finetuned-cola | null | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# andreiliphdpr/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0015
- Train Accuracy: 0.9995
- Validation Loss: 0.0570
- Validation Accuracy: 0.9915
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 43750, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0399 | 0.9870 | 0.0281 | 0.9908 | 0 |
| 0.0182 | 0.9944 | 0.0326 | 0.9901 | 1 |
| 0.0089 | 0.9971 | 0.0396 | 0.9912 | 2 |
| 0.0040 | 0.9987 | 0.0486 | 0.9918 | 3 |
| 0.0015 | 0.9995 | 0.0570 | 0.9915 | 4 |
### Framework versions
- Transformers 4.15.0.dev0
- TensorFlow 2.6.2
- Datasets 1.15.1
- Tokenizers 0.10.3
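Since the run above was done in TensorFlow/Keras, a minimal inference sketch could use the TF auto classes (an assumption based on the exported TF weights; the label mapping is not documented in this card, so the output interpretation below is illustrative):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("andreiliphdpr/distilbert-base-uncased-finetuned-cola")
model = TFAutoModelForSequenceClassification.from_pretrained("andreiliphdpr/distilbert-base-uncased-finetuned-cola")

# Illustrative input; the meaning of each output class is not documented in the card.
inputs = tokenizer("This sentence looks perfectly fine to me.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```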
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "andreiliphdpr/distilbert-base-uncased-finetuned-cola", "results": []}]} | andreiliphdpr/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
feature-extraction | transformers |
# SimCLS
SimCLS is a framework for abstractive summarization presented in [SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization](https://arxiv.org/abs/2106.01890).
It is a two-stage approach consisting of a *generator* and a *scorer*. In the first stage, a large pre-trained model for abstractive summarization (the *generator*) is used to generate candidate summaries, whereas, in the second stage, the *scorer* assigns a score to each candidate given the source document. The final summary is the highest-scoring candidate.
This model is the *scorer* trained for summarization of BillSum ([paper](https://arxiv.org/abs/1910.00523), [datasets](https://huggingface.co/datasets/billsum)). It should be used in conjunction with [google/pegasus-billsum](https://huggingface.co/google/pegasus-billsum). See [our Github repository](https://github.com/andrejmiscic/simcls-pytorch) for details on training, evaluation, and usage.
## Usage
```bash
git clone https://github.com/andrejmiscic/simcls-pytorch.git
cd simcls-pytorch
pip3 install torch torchvision torchaudio transformers sentencepiece
```
```python
from src.model import SimCLS, GeneratorType
summarizer = SimCLS(generator_type=GeneratorType.Pegasus,
generator_path="google/pegasus-billsum",
scorer_path="andrejmiscic/simcls-scorer-billsum")
document = "This is a legal document."
summary = summarizer(document)
print(summary)
```
### Results
All of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See [SimCLS paper](https://arxiv.org/abs/2106.01890) for a description of baselines.
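A percentile bootstrap of this kind can be sketched as follows (an illustrative example rather than the exact evaluation code; `per_example_scores` is a placeholder for the per-document Rouge values):
```python
import numpy as np

def bootstrap_ci(per_example_scores, n_iter=10000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of per-example metric values."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_example_scores)  # placeholder: one Rouge value per document
    means = np.array([rng.choice(scores, size=len(scores), replace=True).mean() for _ in range(n_iter)])
    return scores.mean(), np.quantile(means, alpha / 2), np.quantile(means, 1 - alpha / 2)
```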
We believe the discrepancies of Rouge-L scores between the original Pegasus work and our evaluation are due to the computation of the metric. Namely, we use a summary level Rouge-L score.
| System | Rouge-1 | Rouge-2 | Rouge-L\* |
|-----------------|----------------------:|----------------------:|----------------------:|
| Pegasus | 57.31 | 40.19 | 45.82 |
| **Our results** | --- | --- | --- |
| Origin | 56.24, [55.74, 56.74] | 37.46, [36.89, 38.03] | 50.71, [50.19, 51.22] |
| Min | 44.37, [43.85, 44.89] | 25.75, [25.30, 26.22] | 38.68, [38.18, 39.16] |
| Max | 62.88, [62.42, 63.33] | 43.96, [43.39, 44.54] | 57.50, [57.01, 58.00] |
| Random | 54.93, [54.43, 55.43] | 35.42, [34.85, 35.97] | 49.19, [48.68, 49.70] |
| **SimCLS** | 57.49, [57.01, 58.00] | 38.54, [37.98, 39.10] | 51.91, [51.39, 52.43] |
### Citation of the original work
```bibtex
@inproceedings{liu-liu-2021-simcls,
title = "{S}im{CLS}: A Simple Framework for Contrastive Learning of Abstractive Summarization",
author = "Liu, Yixin and
Liu, Pengfei",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-short.135",
doi = "10.18653/v1/2021.acl-short.135",
pages = "1065--1072",
}
```
| {"language": ["en"], "tags": ["simcls"], "datasets": ["billsum"]} | andrejmiscic/simcls-scorer-billsum | null | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"simcls",
"en",
"dataset:billsum",
"arxiv:2106.01890",
"arxiv:1910.00523",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
feature-extraction | transformers |
# SimCLS
SimCLS is a framework for abstractive summarization presented in [SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization](https://arxiv.org/abs/2106.01890).
It is a two-stage approach consisting of a *generator* and a *scorer*. In the first stage, a large pre-trained model for abstractive summarization (the *generator*) is used to generate candidate summaries, whereas, in the second stage, the *scorer* assigns a score to each candidate given the source document. The final summary is the highest-scoring candidate.
This model is the *scorer* trained for summarization of CNN/DailyMail ([paper](https://arxiv.org/abs/1602.06023), [datasets](https://huggingface.co/datasets/cnn_dailymail)). It should be used in conjunction with [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn). See [our Github repository](https://github.com/andrejmiscic/simcls-pytorch) for details on training, evaluation, and usage.
## Usage
```bash
git clone https://github.com/andrejmiscic/simcls-pytorch.git
cd simcls-pytorch
pip3 install torch torchvision torchaudio transformers sentencepiece
```
```python
from src.model import SimCLS, GeneratorType
summarizer = SimCLS(generator_type=GeneratorType.Bart,
generator_path="facebook/bart-large-cnn",
scorer_path="andrejmiscic/simcls-scorer-cnndm")
article = "This is a news article."
summary = summarizer(article)
print(summary)
```
### Results
All of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See [SimCLS paper](https://arxiv.org/abs/2106.01890) for a description of baselines.
| System | Rouge-1 | Rouge-2 | Rouge-L |
|------------------|----------------------:|----------------------:|----------------------:|
| BART | 44.16 | 21.28 | 40.90 |
| **SimCLS paper** | --- | --- | --- |
| Origin | 44.39 | 21.21 | 41.28 |
| Min | 33.17 | 11.67 | 30.77 |
| Max | 54.36 | 28.73 | 50.77 |
| Random | 43.98 | 20.06 | 40.94 |
| **SimCLS** | 46.67 | 22.15 | 43.54 |
| **Our results** | --- | --- | --- |
| Origin | 44.41, [44.18, 44.63] | 21.05, [20.80, 21.29] | 41.53, [41.30, 41.75] |
| Min | 33.43, [33.25, 33.62] | 10.97, [10.82, 11.12] | 30.57, [30.40, 30.74] |
| Max | 53.87, [53.67, 54.08] | 29.72, [29.47, 29.98] | 51.13, [50.92, 51.34] |
| Random | 43.94, [43.73, 44.16] | 20.09, [19.86, 20.31] | 41.06, [40.85, 41.27] |
| **SimCLS** | 46.53, [46.32, 46.75] | 22.14, [21.91, 22.37] | 43.56, [43.34, 43.78] |
### Citation of the original work
```bibtex
@inproceedings{liu-liu-2021-simcls,
title = "{S}im{CLS}: A Simple Framework for Contrastive Learning of Abstractive Summarization",
author = "Liu, Yixin and
Liu, Pengfei",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-short.135",
doi = "10.18653/v1/2021.acl-short.135",
pages = "1065--1072",
}
```
| {"language": ["en"], "tags": ["simcls"], "datasets": ["cnn_dailymail"]} | andrejmiscic/simcls-scorer-cnndm | null | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"simcls",
"en",
"dataset:cnn_dailymail",
"arxiv:2106.01890",
"arxiv:1602.06023",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
feature-extraction | transformers |
# SimCLS
SimCLS is a framework for abstractive summarization presented in [SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization](https://arxiv.org/abs/2106.01890).
It is a two-stage approach consisting of a *generator* and a *scorer*. In the first stage, a large pre-trained model for abstractive summarization (the *generator*) is used to generate candidate summaries, whereas, in the second stage, the *scorer* assigns a score to each candidate given the source document. The final summary is the highest-scoring candidate.
This model is the *scorer* trained for summarization of XSum ([paper](https://arxiv.org/abs/1808.08745), [datasets](https://huggingface.co/datasets/xsum)). It should be used in conjunction with [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum). See [our Github repository](https://github.com/andrejmiscic/simcls-pytorch) for details on training, evaluation, and usage.
## Usage
```bash
git clone https://github.com/andrejmiscic/simcls-pytorch.git
cd simcls-pytorch
pip3 install torch torchvision torchaudio transformers sentencepiece
```
```python
from src.model import SimCLS, GeneratorType
summarizer = SimCLS(generator_type=GeneratorType.Pegasus,
generator_path="google/pegasus-xsum",
scorer_path="andrejmiscic/simcls-scorer-xsum")
article = "This is a news article."
summary = summarizer(article)
print(summary)
```
### Results
All of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See [SimCLS paper](https://arxiv.org/abs/2106.01890) for a description of baselines.
| System | Rouge-1 | Rouge-2 | Rouge-L |
|------------------|----------------------:|----------------------:|----------------------:|
| Pegasus | 47.21 | 24.56 | 39.25 |
| **SimCLS paper** | --- | --- | --- |
| Origin | 47.10 | 24.53 | 39.23 |
| Min | 40.97 | 19.18 | 33.68 |
| Max | 52.45 | 28.28 | 43.36 |
| Random | 46.72 | 23.64 | 38.55 |
| **SimCLS** | 47.61 | 24.57 | 39.44 |
| **Our results** | --- | --- | --- |
| Origin | 47.16, [46.85, 47.48] | 24.59, [24.25, 24.92] | 39.30, [38.96, 39.62] |
| Min | 41.06, [40.76, 41.34] | 18.30, [18.03, 18.56] | 32.70, [32.42, 32.97] |
| Max | 51.83, [51.53, 52.14] | 28.92, [28.57, 29.26] | 44.02, [43.69, 44.36] |
| Random | 46.47, [46.17, 46.78] | 23.45, [23.13, 23.77] | 38.28, [37.96, 38.60] |
| **SimCLS** | 47.17, [46.87, 47.46] | 23.90, [23.59, 24.23] | 38.96, [38.64, 39.29] |
### Citation of the original work
```bibtex
@inproceedings{liu-liu-2021-simcls,
title = "{S}im{CLS}: A Simple Framework for Contrastive Learning of Abstractive Summarization",
author = "Liu, Yixin and
Liu, Pengfei",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-short.135",
doi = "10.18653/v1/2021.acl-short.135",
pages = "1065--1072",
}
```
| {"language": ["en"], "tags": ["simcls"], "datasets": ["xsum"]} | andrejmiscic/simcls-scorer-xsum | null | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"simcls",
"en",
"dataset:xsum",
"arxiv:2106.01890",
"arxiv:1808.08745",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
translation | transformers | {"language": false, "license": "cc-by-4.0", "tags": ["translation"], "widget": [{"text": "Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua."}]} | andrek/LAT2NOB | null | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"translation",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | andrepreira/nomeacao_classificacao_word2vec_cbow | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-cased-finetuned-squad", "results": []}]} | andresestevez/bert-base-cased-finetuned-squad | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers | {} | andresestevez/bert-finetuned-squad-accelerate | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | andrewlitv/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {"license": "afl-3.0"} | andrewresh/newandrewreshmodel | null | [
"license:afl-3.0",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | andrex/bot-rick | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | transformers | {} | andreylobach/ru_conversational_cased_L-12_H-768_A-12_pt | null | [
"transformers",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | andreymoisv/test-ru-gpt3 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | andreymoisv/test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | andriopa/blueBERT-base-finetuned | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | andrliu/wav2vec2-base-timit-demo-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# Rick and Morty DialoGPT Model | {"tags": ["conversational"]} | anduush/DialoGPT-small-Rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers | # Medical History Model based on ruGPT2 by @sberbank-ai
A simple model for helping medical staff to complete patients' medical histories.
The model uses the pretrained [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2).
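A minimal generation sketch (assuming the checkpoint loads with the standard causal-LM classes; the Russian prompt, "the patient complains of", is only an illustration):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anechaev/ru_med_gpt3sm_based_on_gpt2")
model = AutoModelForCausalLM.from_pretrained("anechaev/ru_med_gpt3sm_based_on_gpt2")

# Illustrative prompt: "Пациент жалуется на" = "The patient complains of".
input_ids = tokenizer.encode("Пациент жалуется на", return_tensors="pt")
output = model.generate(input_ids, max_length=60, do_sample=True, top_p=0.95, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```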
| {"language": ["ru"], "license": "mit", "tags": ["PyTorch", "Transformers"]} | anechaev/ru_med_gpt3sm_based_on_gpt2 | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"PyTorch",
"Transformers",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 583416409
- CO2 Emissions (in grams): 72.26141764997115
## Validation Metrics
- Loss: 1.4701834917068481
- Rouge1: 47.7785
- Rouge2: 24.8518
- RougeL: 40.2231
- RougeLsum: 43.9487
- Gen Len: 18.8029
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/anegi/autonlp-dialogue-summariztion-583416409
``` | {"language": "en", "tags": "autonlp", "datasets": ["anegi/autonlp-data-dialogue-summariztion"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 72.26141764997115} | anegi/autonlp-dialogue-summariztion-583416409 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autonlp",
"en",
"dataset:anegi/autonlp-data-dialogue-summariztion",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 412010597
- CO2 Emissions (in grams): 10.411685187181709
## Validation Metrics
- Loss: 0.12585781514644623
- Accuracy: 0.9475446428571429
- Precision: 0.9454660748256183
- Recall: 0.964424320827943
- AUC: 0.990229573862156
- F1: 0.9548511047070125
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/anel/autonlp-cml-412010597
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("anel/autonlp-cml-412010597", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("anel/autonlp-cml-412010597", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["anel/autonlp-data-cml"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 10.411685187181709} | anel/autonlp-cml-412010597 | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:anel/autonlp-data-cml",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 432211280
- CO2 Emissions (in grams): 8.898145050355591
## Validation Metrics
- Loss: 0.12489336729049683
- Accuracy: 0.9520089285714286
- Precision: 0.9436443331246086
- Recall: 0.9747736093143596
- AUC: 0.9910066767410616
- F1: 0.958956411072224
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/anelnurkayeva/autonlp-covid-432211280
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("anelnurkayeva/autonlp-covid-432211280", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("anelnurkayeva/autonlp-covid-432211280", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["anelnurkayeva/autonlp-data-covid"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 8.898145050355591} | anelnurkayeva/autonlp-covid-432211280 | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:anelnurkayeva/autonlp-data-covid",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# BERT for Patents
BERT for Patents is a model trained by Google on 100M+ patents (not just US patents). It is based on BERT<sub>LARGE</sub>.
If you want to learn more about the model, check out the [blog post](https://cloud.google.com/blog/products/ai-machine-learning/how-ai-improves-patent-analysis), [white paper](https://services.google.com/fh/files/blogs/bert_for_patents_white_paper.pdf) and [GitHub page](https://github.com/google/patents-public-data/blob/master/models/BERT%20for%20Patents.md) containing the original TensorFlow checkpoint.
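A minimal fill-mask sketch (loading through the standard `transformers` pipeline; the sentence is one of the widget examples from this card):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="anferico/bert-for-patents")
# One of the card's widget examples.
print(fill_mask("The present [MASK] provides a torque sensor that is small and highly rigid and for which high production efficiency is possible."))
```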
---
### Projects using this model (or variants of it):
- [Patents4IPPC](https://github.com/ec-jrc/Patents4IPPC) (carried out by [Pi School](https://picampus-school.com/) and commissioned by the [Joint Research Centre (JRC)](https://ec.europa.eu/jrc/en) of the European Commission)
| {"language": ["en"], "license": "apache-2.0", "tags": ["masked-lm", "pytorch"], "metrics": ["perplexity"], "pipeline-tag": "fill-mask", "mask-token": "[MASK]", "widget": [{"text": "The present [MASK] provides a torque sensor that is small and highly rigid and for which high production efficiency is possible."}, {"text": "The present invention relates to [MASK] accessories and pertains particularly to a brake light unit for bicycles."}, {"text": "The present invention discloses a space-bound-free [MASK] and its coordinate determining circuit for determining a coordinate of a stylus pen."}, {"text": "The illuminated [MASK] includes a substantially translucent canopy supported by a plurality of ribs pivotally swingable towards and away from a shaft."}]} | anferico/bert-for-patents | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"fill-mask",
"masked-lm",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# Monke Messenger DialoGPT Model | {"tags": ["conversational"]} | ange/DialoGPT-medium-Monke | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | angelo/test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | angggapradiktas/model_1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | transformers | {} | angiquer/twitterko-cha-electra-base-discriminator | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | angiquer/twitterko-cha-electra-base-generator | null | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | transformers | {} | angiquer/twitterko-electra-base-discriminator-large | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | transformers | {} | angiquer/twitterko-electra-base-discriminator | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | angiquer/twitterko-electra-base-generator-large | null | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | angiquer/twitterko-electra-base-generator | null | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | angustay/helloworld | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | angxl/testing | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | anhdungitvn/finbert | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
feature-extraction | transformers | {} | anhtunguyen98/xlm-base-vi-en | null | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
feature-extraction | transformers | {} | anhtunguyen98/xlm-base-vi | null | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | anilkumar-kanasani/normal-gpt2-after-preproc | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from unicode_tr import unicode_tr
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = str(unicode_tr(re.sub(chars_to_ignore_regex, "", batch["sentence"])).lower())
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 17.46 %
## Training
The unicode_tr package is used for converting sentences to lower case, since the built-in lower() does not handle Turkish casing correctly.
Since training data is very limited for Turkish, all of it is used with a K-Fold (k=5) training approach. The best model out of the 5 runs is uploaded. Training arguments:
--num_train_epochs="30" \\
--per_device_train_batch_size="32" \\
--evaluation_strategy="steps" \\
--activation_dropout="0.055" \\
--attention_dropout="0.094" \\
--feat_proj_dropout="0.04" \\
--hidden_dropout="0.047" \\
--layerdrop="0.041" \\
--learning_rate="2.34e-4" \\
--mask_time_prob="0.082" \\
--warmup_steps="250" \\
All trainings took ~20 hours with a GeForce RTX 3090 Graphics Card. | {"language": "tr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "results": [{"task": {"name": "Speech Recognition", "type": "automatic-speech-recognition"}, "dataset": {"name": "Common Voice tr", "type": "common_voice", "args": "tr"}, "metrics": [{"name": "Test WER", "type": "wer", "value": 17.46}]}]} | aniltrkkn/wav2vec2-large-xlsr-53-turkish | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | anily/distilgpt2-finetuned-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-BioclinicalBERT-ADR
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the ade_corpus_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 171 | 0.9441 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
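A minimal question-answering sketch (using the standard `transformers` QA pipeline; the question and context below are invented for illustration):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="anindabitm/sagemaker-BioclinicalBERT-ADR")
# Illustrative ADE-style example; not taken from the training data.
result = qa(
    question="What adverse reaction did the patient develop?",
    context="After starting naproxen, the patient developed a severe skin rash and the drug was discontinued.",
)
print(result)
```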
| {"tags": ["generated_from_trainer"], "datasets": ["ade_corpus_v2"], "model-index": [{"name": "sagemaker-BioclinicalBERT-ADR", "results": []}]} | anindabitm/sagemaker-BioclinicalBERT-ADR | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:ade_corpus_v2",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2434
- Accuracy: 0.9165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9423 | 1.0 | 500 | 0.2434 | 0.9165 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
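A minimal inference sketch (standard text-classification pipeline; the label names depend on the id2label mapping stored with the checkpoint):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="anindabitm/sagemaker-distilbert-emotion")
# Illustrative input; labels follow the emotion dataset (e.g. joy, sadness, anger, ...).
print(classifier("I can't wait to see my friends this weekend!"))
```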
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy"], "model-index": [{"name": "sagemaker-distilbert-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9165, "name": "Accuracy"}]}]}]} | anindabitm/sagemaker-distilbert-emotion | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-qnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3194
- Accuracy: 0.9112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3116 | 1.0 | 6547 | 0.2818 | 0.8849 |
| 0.2467 | 2.0 | 13094 | 0.2532 | 0.9001 |
| 0.1858 | 3.0 | 19641 | 0.3194 | 0.9112 |
| 0.1449 | 4.0 | 26188 | 0.4338 | 0.9103 |
| 0.0584 | 5.0 | 32735 | 0.5752 | 0.9052 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2-finetuned-qnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "qnli"}, "metrics": [{"type": "accuracy", "value": 0.9112209408749771, "name": "Accuracy"}]}]}]} | anirudh21/albert-base-v2-finetuned-qnli | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-rte
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2496
- Accuracy: 0.7581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 249 | 0.5914 | 0.6751 |
| No log | 2.0 | 498 | 0.5843 | 0.7184 |
| 0.5873 | 3.0 | 747 | 0.6925 | 0.7220 |
| 0.5873 | 4.0 | 996 | 1.1613 | 0.7545 |
| 0.2149 | 5.0 | 1245 | 1.2496 | 0.7581 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2-finetuned-rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.7581227436823105, "name": "Accuracy"}]}]}]} | anirudh21/albert-base-v2-finetuned-rte | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-wnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6878
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6878 | 0.5634 |
| No log | 2.0 | 80 | 0.6919 | 0.5634 |
| No log | 3.0 | 120 | 0.6877 | 0.5634 |
| No log | 4.0 | 160 | 0.6984 | 0.4085 |
| No log | 5.0 | 200 | 0.6957 | 0.5211 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]} | anirudh21/albert-base-v2-finetuned-wnli | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | {} | anirudh21/albert-large-v2-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | anirudh21/albert-large-v2-finetuned-mnli | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | anirudh21/albert-large-v2-finetuned-mrpc | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | anirudh21/albert-large-v2-finetuned-qnli | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | anirudh21/albert-large-v2-finetuned-qqp | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2-finetuned-rte
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6827
- Accuracy: 0.5487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 18 | 0.6954 | 0.5271 |
| No log | 2.0 | 36 | 0.6860 | 0.5379 |
| No log | 3.0 | 54 | 0.6827 | 0.5487 |
| No log | 4.0 | 72 | 0.7179 | 0.5235 |
| No log | 5.0 | 90 | 0.7504 | 0.5379 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-large-v2-finetuned-rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.5487364620938628, "name": "Accuracy"}]}]}]} | anirudh21/albert-large-v2-finetuned-rte | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | {} | anirudh21/albert-large-v2-finetuned-sst2 | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2-finetuned-wnli
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6919
- Accuracy: 0.5352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 17 | 0.7292 | 0.4366 |
| No log | 2.0 | 34 | 0.6919 | 0.5352 |
| No log | 3.0 | 51 | 0.7084 | 0.4648 |
| No log | 4.0 | 68 | 0.7152 | 0.5352 |
| No log | 5.0 | 85 | 0.7343 | 0.5211 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-large-v2-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5352112676056338, "name": "Accuracy"}]}]}]} | anirudh21/albert-large-v2-finetuned-wnli | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | anirudh21/albert-xlarge-v2-finetuned-mnli | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xlarge-v2-finetuned-mrpc
This model is a fine-tuned version of [albert-xlarge-v2](https://huggingface.co/albert-xlarge-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5563
- Accuracy: 0.7132
- F1: 0.8146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.6898 | 0.5221 | 0.6123 |
| No log | 2.0 | 126 | 0.6298 | 0.6838 | 0.8122 |
| No log | 3.0 | 189 | 0.6043 | 0.7010 | 0.8185 |
| No log | 4.0 | 252 | 0.5834 | 0.7010 | 0.8146 |
| No log | 5.0 | 315 | 0.5563 | 0.7132 | 0.8146 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "albert-xlarge-v2-finetuned-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.7132352941176471, "name": "Accuracy"}, {"type": "f1", "value": 0.8145800316957211, "name": "F1"}]}]}]} | anirudh21/albert-xlarge-v2-finetuned-mrpc | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xlarge-v2-finetuned-wnli
This model is a fine-tuned version of [albert-xlarge-v2](https://huggingface.co/albert-xlarge-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6869
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6906 | 0.5070 |
| No log | 2.0 | 80 | 0.6869 | 0.5634 |
| No log | 3.0 | 120 | 0.6905 | 0.5352 |
| No log | 4.0 | 160 | 0.6960 | 0.4225 |
| No log | 5.0 | 200 | 0.7011 | 0.3803 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-xlarge-v2-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]} | anirudh21/albert-xlarge-v2-finetuned-wnli | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | anirudh21/albert-xxlarge-v2-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | anirudh21/albert-xxlarge-v2-finetuned-mrpc | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | anirudh21/albert-xxlarge-v2-finetuned-qnli | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | anirudh21/albert-xxlarge-v2-finetuned-qqp | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | anirudh21/albert-xxlarge-v2-finetuned-rte | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | anirudh21/albert-xxlarge-v2-finetuned-sst2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xxlarge-v2-finetuned-wnli
This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6970
- Accuracy: 0.5070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 13 | 0.8066 | 0.4366 |
| No log | 2.0 | 26 | 0.6970 | 0.5070 |
| No log | 3.0 | 39 | 0.7977 | 0.4507 |
| No log | 4.0 | 52 | 0.7906 | 0.4930 |
| No log | 5.0 | 65 | 0.8459 | 0.4366 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-xxlarge-v2-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5070422535211268, "name": "Accuracy"}]}]}]} | anirudh21/albert-xxlarge-v2-finetuned-wnli | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9664
- Matthews Correlation: 0.5797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5017 | 1.0 | 535 | 0.5252 | 0.4841 |
| 0.2903 | 2.0 | 1070 | 0.5550 | 0.4967 |
| 0.1839 | 3.0 | 1605 | 0.7295 | 0.5634 |
| 0.1132 | 4.0 | 2140 | 0.7762 | 0.5702 |
| 0.08 | 5.0 | 2675 | 0.9664 | 0.5797 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "bert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5796941781913538, "name": "Matthews Correlation"}]}]}]} | anirudh21/bert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |