modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
---|---|---|---|---|---|---|---|---
textattack/albert-base-v2-WNLI | 2020-07-06T16:33:17.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json"
] | textattack | 22 | transformers | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 2e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.5915492957746479, as measured by the
eval set accuracy, found after 0 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
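The `nlp` library named above has since been renamed `datasets`; a minimal sketch of loading the same GLUE/WNLI data with it today (the newer package name is the only assumption):
```
from datasets import load_dataset  # successor of the `nlp` library referenced in the card

# GLUE's WNLI task: sentence pairs labeled 0/1 for entailment.
wnli = load_dataset("glue", "wnli")
print(wnli["train"][0])             # {'sentence1': ..., 'sentence2': ..., 'label': ..., 'idx': ...}
print(wnli["validation"].num_rows)  # size of the validation split
```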
|
textattack/albert-base-v2-ag-news | 2020-07-07T21:59:15.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json"
] | textattack | 73 | transformers | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9471052631578948, as measured by the
eval set accuracy, found after 3 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/albert-base-v2-imdb | 2020-07-06T16:34:24.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json"
] | textattack | 252 | transformers | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the imdb dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.89236, as measured by the
eval set accuracy, found after 3 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
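A minimal inference sketch for this checkpoint with the `transformers` pipeline API (the review text is invented, and label names such as LABEL_0/LABEL_1 depend on what this checkpoint's config.json ships):
```
from transformers import pipeline

# Two-class sentiment head fine-tuned on imdb, per the card above.
classifier = pipeline("text-classification", model="textattack/albert-base-v2-imdb")

# Invented example review.
print(classifier("A slow start, but the last act is absolutely worth the wait."))
```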
|
textattack/albert-base-v2-rotten-tomatoes | 2020-07-06T16:35:34.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json"
] | textattack | 52 | transformers | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8808630393996247, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/albert-base-v2-rotten_tomatoes | 2020-06-25T20:00:46.000Z | [
"pytorch",
"tensorboard",
"albert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"events.out.tfevents.1593060127.qcuda1",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json",
"1593060127.506348/events.out.tfevents.1593060127.qcuda1"
] | textattack | 31 | transformers | ## albert-base-v2 fine-tuned with TextAttack on the rotten_tomatoes dataset
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 10 epochs with a batch size of 128, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8855534709193246, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/albert-base-v2-snli | 2020-07-06T16:36:47.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json"
] | textattack | 27 | transformers | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the snli dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 2e-05, and a maximum sequence length of 64.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9060150375939849, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/albert-base-v2-yelp-polarity | 2020-07-06T16:37:10.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json"
] | textattack | 992 | transformers | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the yelp_polarity dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 3e-05, and a maximum sequence length of 512.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.975078947368421, as measured by the
eval set accuracy, found after 3 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/bert-base-cased-STS-B | 2021-05-20T07:30:08.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.txt"
] | textattack | 43 | transformers | ## TextAttack Model Card
This `bert-base-cased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 3 epochs with a batch size of 128, a learning
rate of 1e-05, and a maximum sequence length of 128.
Since this was a regression task, the model was trained with a mean squared error loss function.
The best score the model achieved on this task was 0.8244429996636282, as measured by the
eval set pearson correlation, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
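Because this is a regression checkpoint (a single output logit trained with MSE, as stated above), the usual classification pipeline output is less informative; a hedged sketch of reading the raw similarity score directly, assuming the shipped config keeps `num_labels=1`:
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "textattack/bert-base-cased-STS-B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Invented sentence pair; STS-B targets are similarity scores on a 0-5 scale,
# though the card does not say whether this checkpoint rescales them.
inputs = tokenizer("A man is playing a guitar.", "Someone is playing a guitar.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```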
|
textattack/bert-base-uncased-CoLA | 2021-05-20T07:31:05.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_cola.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 1,777 | transformers | |
textattack/bert-base-uncased-MNLI | 2021-05-20T07:31:58.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 993 | transformers | |
textattack/bert-base-uncased-MRPC | 2021-05-20T07:32:52.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results_mrpc.txt",
"flax_model.msgpack",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"training_args.bin",
"vocab.txt"
] | textattack | 371 | transformers | ## TextAttack Model Card
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 2e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8774509803921569, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/bert-base-uncased-QNLI | 2021-05-20T07:33:46.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_qnli.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 60 | transformers | |
textattack/bert-base-uncased-QQP | 2021-05-20T07:34:46.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_qqp.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 601 | transformers | |
textattack/bert-base-uncased-RTE | 2021-05-20T07:36:18.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results_rte.txt",
"flax_model.msgpack",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"training_args.bin",
"vocab.txt"
] | textattack | 1,014 | transformers | ## TextAttack Model Card
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 8, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.7256317689530686, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/bert-base-uncased-SST-2 | 2021-05-20T07:37:12.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_sst-2.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 5,320 | transformers | |
textattack/bert-base-uncased-STS-B | 2021-05-20T07:38:28.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_sts-b.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 56 | transformers | |
textattack/bert-base-uncased-WNLI | 2021-05-20T07:39:22.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results_wnli.txt",
"flax_model.msgpack",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"training_args.bin",
"vocab.txt"
] | textattack | 59 | transformers | ## TextAttack Model Card
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 5e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.5633802816901409, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/bert-base-uncased-ag-news | 2021-05-20T07:40:21.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.txt"
] | textattack | 1,747 | transformers | ## TextAttack Model Card
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9514473684210526, as measured by the
eval set accuracy, found after 3 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/bert-base-uncased-imdb | 2021-05-20T07:42:02.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.txt"
] | textattack | 2,729 | transformers | ## TextAttack Model Card
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the imdb dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.89088, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
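The accuracy above comes from TextAttack's own training loop; a rough, non-authoritative way to sanity-check such a number (the eval split, subsampling, and label naming below are assumptions, not the original protocol):
```
from datasets import load_dataset
from transformers import pipeline

clf = pipeline("text-classification", model="textattack/bert-base-uncased-imdb")

# Subsample the imdb test split for speed; the card trained with max sequence length 128.
imdb_test = load_dataset("imdb", split="test").shuffle(seed=0).select(range(1000))
preds = clf(imdb_test["text"], truncation=True, max_length=128, batch_size=32)

# Assumes generic LABEL_0/LABEL_1 names mapped to the dataset's 0=neg / 1=pos ids.
pred_ids = [int(p["label"].split("_")[-1]) for p in preds]
print(sum(p == y for p, y in zip(pred_ids, imdb_test["label"])) / len(pred_ids))
```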
|
textattack/bert-base-uncased-rotten-tomatoes | 2021-05-20T07:46:20.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.txt"
] | textattack | 1,890 | transformers | ## TextAttack Model Card
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 10 epochs with a batch size of 16, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.875234521575985, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/bert-base-uncased-rotten_tomatoes | 2021-05-20T07:47:13.000Z | [
"pytorch",
"jax",
"tensorboard",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"events.out.tfevents.1593052540.qcuda11",
"flax_model.msgpack",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.txt",
"1593052540.6245422/events.out.tfevents.1593052540.qcuda11"
] | textattack | 35 | transformers | ## bert-base-uncased fine-tuned with TextAttack on the rotten_tomatoes dataset
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 10 epochs with a batch size of 64, a learning
rate of 5e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.875234521575985, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/bert-base-uncased-snli | 2021-05-20T07:48:06.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"train_args.json",
"vocab.txt"
] | textattack | 1,202 | transformers | |
textattack/bert-base-uncased-yelp-polarity | 2021-05-20T07:49:07.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.txt"
] | textattack | 408 | transformers | ## TextAttack Model Card
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the yelp_polarity dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 5e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9699473684210527, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/distilbert-base-cased-CoLA | 2020-06-09T16:45:43.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_cola.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 314 | transformers | |
textattack/distilbert-base-cased-MRPC | 2020-06-09T16:46:01.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_mrpc.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 35 | transformers | |
textattack/distilbert-base-cased-QQP | 2020-06-09T16:46:12.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_qqp.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 35 | transformers | |
textattack/distilbert-base-cased-SST-2 | 2020-06-09T16:46:25.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_sst-2.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 302 | transformers | |
textattack/distilbert-base-cased-STS-B | 2020-06-09T16:46:42.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_sts-b.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 26 | transformers | |
textattack/distilbert-base-cased-snli | 2020-07-06T16:37:00.000Z | [
"pytorch",
"distilbert",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.txt"
] | textattack | 96 | transformers | ## TextAttack Model Card
This `distilbert-base-cased` model was fine-tuned for sequence classification using TextAttack
and the snli dataset loaded using the `nlp` library. The model was fine-tuned
for 3 epochs with a batch size of 256, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8768542979069295, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
|
textattack/distilbert-base-uncased-CoLA | 2020-07-06T16:29:03.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results_cola.txt",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"training_args.bin",
"vocab.txt"
] | textattack | 1,967 | transformers | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8235858101629914, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/distilbert-base-uncased-MNLI | 2020-06-09T16:47:05.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 44 | transformers | |
textattack/distilbert-base-uncased-MRPC | 2020-07-06T16:30:12.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results_mrpc.txt",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"training_args.bin",
"vocab.txt"
] | textattack | 61 | transformers | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 2e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8578431372549019, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/distilbert-base-uncased-QNLI | 2020-06-09T16:47:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_qnli.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 55 | transformers | |
textattack/distilbert-base-uncased-QQP | 2020-06-09T16:47:45.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_qqp.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 28 | transformers | |
textattack/distilbert-base-uncased-RTE | 2020-07-06T16:31:28.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results_rte.txt",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"training_args.bin",
"vocab.txt"
] | textattack | 345 | transformers | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.6570397111913358, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/distilbert-base-uncased-SST-2 | 2020-06-09T16:48:10.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_sst-2.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 46 | transformers | |
textattack/distilbert-base-uncased-STS-B | 2020-06-09T16:48:25.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_sts-b.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | textattack | 26 | transformers | |
textattack/distilbert-base-uncased-WNLI | 2020-07-06T16:33:44.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results_wnli.txt",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"training_args.bin",
"vocab.txt"
] | textattack | 45 | transformers | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 128, a learning
rate of 2e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.5633802816901409, as measured by the
eval set accuracy, found after 0 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/distilbert-base-uncased-ag-news | 2020-07-07T22:01:14.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.txt"
] | textattack | 262 | transformers | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9478947368421052, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/distilbert-base-uncased-imdb | 2020-07-06T16:34:50.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.txt"
] | textattack | 375 | transformers | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the imdb dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.88, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/distilbert-base-uncased-rotten-tomatoes | 2020-07-06T16:36:02.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.txt"
] | textattack | 380 | transformers | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 3 epochs with a batch size of 128, a learning
rate of 1e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8395872420262664, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/facebook-bart-base-RTE | 2020-08-20T15:50:48.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.json"
] | textattack | 23 | transformers | ## TextAttack Model Card
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.7256317689530686, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/facebook-bart-base-glue-RTE | 2020-08-20T15:49:05.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.json"
] | textattack | 19 | transformers | ## TextAttack Model Card
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.7256317689530686, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/facebook-bart-large-CoLA | 2020-06-09T16:49:04.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_cola.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | textattack | 342 | transformers | |
textattack/facebook-bart-large-MNLI | 2020-06-09T16:49:34.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | textattack | 21 | transformers | |
textattack/facebook-bart-large-MRPC | 2020-06-09T16:49:43.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_mrpc.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | textattack | 68 | transformers | |
textattack/facebook-bart-large-QNLI | 2020-06-09T16:50:26.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_qnli.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | textattack | 48 | transformers | |
textattack/facebook-bart-large-RTE | 2020-06-09T16:50:55.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_rte.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | textattack | 310 | transformers | |
textattack/facebook-bart-large-SST-2 | 2020-06-09T16:51:43.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_sst-2.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | textattack | 473 | transformers | |
textattack/facebook-bart-large-WNLI | 2020-06-09T16:52:24.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_wnli.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | textattack | 9 | transformers | |
textattack/roberta-base-CoLA | 2021-05-20T22:05:35.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results_cola.txt",
"flax_model.msgpack",
"log.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"training_args.bin",
"vocab.json"
] | textattack | 2,101 | transformers | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.850431447746884, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/roberta-base-MNLI | 2021-05-20T22:06:43.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | textattack | 587 | transformers | |
textattack/roberta-base-MRPC | 2021-05-20T22:07:47.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results_mrpc.txt",
"flax_model.msgpack",
"log.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"training_args.bin",
"vocab.json"
] | textattack | 296 | transformers | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 3e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9117647058823529, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/roberta-base-QNLI | 2021-05-20T22:09:33.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_qnli.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | textattack | 30 | transformers | |
textattack/roberta-base-RTE | 2021-05-20T22:10:37.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results_rte.txt",
"flax_model.msgpack",
"log.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"training_args.bin",
"vocab.json"
] | textattack | 457 | transformers | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.7942238267148014, as measured by the
eval set accuracy, found after 3 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/roberta-base-SST-2 | 2021-05-20T22:11:39.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_sst-2.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | textattack | 17,055 | transformers | |
textattack/roberta-base-STS-B | 2021-05-20T22:12:47.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results_sts-b.txt",
"flax_model.msgpack",
"log.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"training_args.bin",
"vocab.json"
] | textattack | 78 | transformers | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 8, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a regression task, the model was trained with a mean squared error loss function.
The best score the model achieved on this task was 0.9108696741479216, as measured by the
eval set pearson correlation, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/roberta-base-WNLI | 2021-05-20T22:13:50.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results_wnli.txt",
"flax_model.msgpack",
"log.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"training_args.bin",
"vocab.json"
] | textattack | 48 | transformers | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 5e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.5633802816901409, as measured by the
eval set accuracy, found after 0 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/roberta-base-ag-news | 2021-05-20T22:15:20.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"log.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.json"
] | textattack | 203 | transformers | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 5e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9469736842105263, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/roberta-base-imdb | 2021-05-20T22:16:19.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"log.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.json"
] | textattack | 767 | transformers | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the imdb dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.91436, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/roberta-base-rotten-tomatoes | 2021-05-20T22:17:29.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"log.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.json"
] | textattack | 37 | transformers | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 10 epochs with a batch size of 64, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9033771106941839, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/roberta-base-rotten_tomatoes | 2021-05-20T22:18:23.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"events.out.tfevents.1593104969.qcuda8",
"flax_model.msgpack",
"log.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_args.json",
"vocab.json",
"1593104969.029003/events.out.tfevents.1593104969.qcuda8"
] | textattack | 28 | transformers | ## roberta-base fine-tuned with TextAttack on the rotten_tomatoes dataset
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 10 epochs with a batch size of 128, a learning
rate of 5e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9033771106941839, as measured by the
eval set accuracy, found after 9 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/xlnet-base-cased-CoLA | 2020-07-06T16:29:34.000Z | [
"pytorch",
"xlnet",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"eval_results_cola.txt",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json",
"training_args.bin"
] | textattack | 145 | transformers | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.7976989453499521, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/xlnet-base-cased-MNLI | 2020-06-09T16:55:37.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | textattack | 355 | transformers | |
textattack/xlnet-base-cased-MRPC | 2020-07-06T16:30:46.000Z | [
"pytorch",
"xlnet",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"eval_results_mrpc.txt",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json",
"training_args.bin"
] | textattack | 45 | transformers | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 5e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8897058823529411, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/xlnet-base-cased-QNLI | 2020-06-09T16:56:10.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_qnli.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | textattack | 40 | transformers | |
textattack/xlnet-base-cased-QQP | 2020-06-09T16:56:26.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_qqp.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | textattack | 21 | transformers | |
textattack/xlnet-base-cased-RTE | 2020-07-06T16:32:05.000Z | [
"pytorch",
"xlnet",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"eval_results_rte.txt",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json",
"training_args.bin"
] | textattack | 103 | transformers | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.7111913357400722, as measured by the
eval set accuracy, found after 3 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/xlnet-base-cased-SST-2 | 2020-06-09T16:56:53.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_sst-2.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | textattack | 136 | transformers | |
textattack/xlnet-base-cased-STS-B | 2020-07-06T16:33:08.000Z | [
"pytorch",
"xlnet",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"eval_results_sts-b.txt",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json",
"training_args.bin"
] | textattack | 65 | transformers | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 8, a learning
rate of 5e-05, and a maximum sequence length of 128.
Since this was a regression task, the model was trained with a mean squared error loss function.
The best score the model achieved on this task was 0.8892630070017784, as measured by the
eval set pearson correlation, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/xlnet-base-cased-WNLI | 2020-07-06T16:34:15.000Z | [
"pytorch",
"xlnet",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"eval_results_wnli.txt",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json",
"training_args.bin"
] | textattack | 39 | transformers | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 3e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.5774647887323944, as measured by the
eval set accuracy, found after 0 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/xlnet-base-cased-imdb | 2020-07-06T16:35:25.000Z | [
"pytorch",
"xlnet",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json"
] | textattack | 83 | transformers | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the imdb dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 2e-05, and a maximum sequence length of 512.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.95352, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/xlnet-base-cased-rotten-tomatoes | 2020-07-06T16:36:38.000Z | [
"pytorch",
"xlnet",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"log.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_args.json"
] | textattack | 33 | transformers | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9071294559099438, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/xlnet-large-cased-CoLA | 2020-06-09T16:57:33.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_cola.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | textattack | 33 | transformers | |
textattack/xlnet-large-cased-MRPC | 2020-06-09T16:58:10.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_mrpc.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | textattack | 22 | transformers | |
textattack/xlnet-large-cased-QQP | 2020-06-09T16:58:40.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_qqp.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | textattack | 16 | transformers | |
textattack/xlnet-large-cased-SST-2 | 2020-06-09T16:59:05.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_sst-2.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | textattack | 34 | transformers | |
textattack/xlnet-large-cased-STS-B | 2020-06-09T16:59:30.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_sts-b.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | textattack | 34 | transformers | |
thatdramebaazguy/movie-roberta-MITmovie-squad | 2021-05-20T22:20:09.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | thatdramebaazguy | 11 | transformers | |
thatdramebaazguy/movie-roberta-MITmovie | 2021-05-20T22:21:20.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | thatdramebaazguy | 15 | transformers | Model Card coming soon! |
thatdramebaazguy/movie-roberta-base | 2021-05-20T22:22:56.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"English",
"dataset:imdb",
"dataset:cornell_movie_dialogue",
"transformers",
"roberta-base",
"masked-language-modeling",
"license:cc-by-4.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | thatdramebaazguy | 33 | transformers | ---
datasets:
- imdb
- cornell_movie_dialogue
language:
- English
thumbnail:
tags:
- roberta
- roberta-base
- masked-language-modeling
- masked-lm
license: cc-by-4.0
---
# roberta-base for MLM
```python
from transformers import pipeline

model_name = "thatdramebaazguy/movie-roberta-base"
fill_mask = pipeline(task="fill-mask", model=model_name, tokenizer=model_name, revision="v1.0")
```
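A minimal usage sketch, assuming the `fill_mask` pipeline above has been created (the masked sentence is purely illustrative):
```python
# Query the fill-mask pipeline with a movie-domain sentence containing RoBERTa's <mask> token.
predictions = fill_mask("The movie was directed by a famous <mask>.")
for p in predictions:
    print(p["token_str"], round(p["score"], 4))
```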
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Fill-Mask
**Training data:** imdb, polarity movie data, cornell_movie_dialogue, 25mlens movie names
**Eval data:** imdb, polarity movie data, cornell_movie_dialogue, 25mlens movie names
**Infrastructure**: 4x Tesla v100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/scripts/shell_scripts/train_movie_roberta.sh)
## Hyperparameters
```
Num examples = 4767233
Num Epochs = 2
Instantaneous batch size per device = 20
Total train batch size (w. parallel, distributed & accumulation) = 80
Gradient Accumulation steps = 1
Total optimization steps = 119182
eval_loss = 1.6153
eval_samples = 20573
perplexity = 5.0296
learning_rate=5e-05
n_gpu = 4
```
## Performance
perplexity = 5.0296
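This is consistent with exp(eval_loss) from the log above, since masked-LM perplexity is the exponential of the average cross-entropy loss; a quick check:
```python
import math

eval_loss = 1.6153          # value reported in the hyperparameters above
print(math.exp(eval_loss))  # ~5.03, matching the reported perplexity
```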
Some of my work:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)
---
|
thatdramebaazguy/movie-roberta-squad | 2021-05-20T22:24:26.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | thatdramebaazguy | 7 | transformers | Model Card coming soon! |
thatdramebaazguy/roberta-base-MITmovie-squad | 2021-05-20T22:25:26.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | thatdramebaazguy | 10 | transformers | |
thatdramebaazguy/roberta-base-MITmovie | 2021-05-20T22:27:12.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | thatdramebaazguy | 16 | transformers | Model Card coming soon! |
thatdramebaazguy/roberta-base-squad | 2021-05-20T22:28:27.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"question-answering",
"English",
"dataset:squad",
"dataset:squad-v1",
"transformers",
"roberta-base",
"masked-language-modeling",
"masked-LM"
] | question-answering | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | thatdramebaazguy | 8 | transformers | ---
datasets:
- squad
- squad-v1
language:
- English
thumbnail:
tags:
- roberta
- roberta-base
- masked-language-modeling
- masked-LM
---
Model Card coming soon!
|
thatdramebaazguy/roberta-base-wikimovies | 2021-05-20T22:29:54.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"English",
"dataset:wikimovies",
"transformers",
"roberta-base",
"masked-language-modeling",
"license:cc-by-4.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | thatdramebaazguy | 7 | transformers | ---
datasets:
- wikimovies
language:
- English
thumbnail:
tags:
- roberta
- roberta-base
- masked-language-modeling
license: cc-by-4.0
---
# roberta-base for MLM
```python
from transformers import pipeline

model_name = "thatdramebaazguy/roberta-base-wikimovies"
fill_mask = pipeline(task="fill-mask", model=model_name, tokenizer=model_name, revision="v1.0")
```
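A minimal usage sketch, assuming the `fill_mask` pipeline above has been created (the example sentence is illustrative only):
```python
# Take the top prediction for a masked token in a movie-related sentence.
top = fill_mask("Titanic is a <mask> movie.")[0]
print(top["sequence"], top["score"])
```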
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Fill-Mask
**Training data:** wikimovies
**Eval data:** wikimovies
**Infrastructure**: 2x Tesla v100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/shell_scripts/train_movie_roberta.sh)
## Hyperparameters
```
num_examples = 4346
batch_size = 16
n_epochs = 3
base_LM_model = "roberta-base"
learning_rate = 5e-05
max_query_length=64
Gradient Accumulation steps = 1
Total optimization steps = 816
evaluation_strategy=IntervalStrategy.NO
prediction_loss_only=False
per_device_train_batch_size=8
per_device_eval_batch_size=8
adam_beta1=0.9
adam_beta2=0.999
adam_epsilon=1e-08,
max_grad_norm=1.0
lr_scheduler_type=SchedulerType.LINEAR
warmup_ratio=0.0
seed=42
eval_steps=500
metric_for_best_model=None
greater_is_better=None
label_smoothing_factor=0.0
```
## Performance
perplexity = 4.3808
Some of my work:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)
---
|
theainerd/Wav2Vec2-large-xlsr-hindi | 2021-03-29T07:14:33.000Z | [
"pytorch",
"wav2vec2",
"hi",
"dataset:Interspeech 2021",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | theainerd | 344 | transformers | ---
language: hi
datasets:
- Interspeech 2021
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Hindi by Shyam Sunder Kumar
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hi
type: common_voice
args: hi
metrics:
- name: Test WER
type: wer
value: 72.62
---
# Wav2Vec2-Large-XLSR-53-hindi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hindi using the [Multilingual and code-switching ASR challenges for low resource Indian languages](https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the hindi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 72.62 %
## Training
The script used for training can be found [Hindi ASR Fine Tuning Wav2Vec2](https://colab.research.google.com/drive/1m-F7et3CHT_kpFqg7UffTIwnUV9AKgrg?usp=sharing) |
theainerd/wav2vec2-large-xlsr-53-odia | 2021-03-24T08:43:37.000Z | [
"pytorch",
"wav2vec2",
"or",
"dataset:OpenSLR",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | theainerd | 10 | transformers | ---
language: or
datasets:
- OpenSLR
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Odia by Shyam Sunder Kumar
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR
type: OpenSLR
args: or
metrics:
- name: Test WER
type: wer
value: 68.75
---
# Wav2Vec2-Large-XLSR-53-Odia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using the [Multilingual and code-switching ASR challenges for low resource Indian languages](https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "or", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "or", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 68.75 %
## Training
The script used for training can be found [Odia ASR Fine Tuning Wav2Vec2](https://colab.research.google.com/drive/1aHpFRTxaBeNblRHAtYOy0hBeXbbMWtot?usp=sharing) |
thilina/mt5-sinhalese-english | 2021-01-03T21:14:26.000Z | [
"pytorch",
"tf",
"mt5",
"seq2seq",
"si",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin"
] | thilina | 58 | transformers | ---
language:
- si
- en
tags:
- translation
license: apache-2.0
metrics:
- sacrebleu
---
# mt5-sinhalese-english
## Model description
An mT5-base model fine-tuned on the Sinhalese-English dataset in the Tatoeba Challenge. Can be used to translate from Sinhalese to English and vice versa.
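A minimal generation sketch, assuming no task prefix is required (the card does not document one); the input sentence is purely illustrative:
```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_name = "thilina/mt5-sinhalese-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("ඔයාට කොහොමද?", return_tensors="pt")  # illustrative Sinhalese input
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```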
## Training details
- English - Sinhala dataset from the Tatoeba Challenge [Datasets](https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/Data.md)
- [mT5-base pre-trained weights](https://huggingface.co/google/mt5-base)
## Eval results
SacreBLEU score:
- English to Sinhalese: 10.3
- Sinhalese to English: 24.4 |
thingsu/koDPR_context | 2021-05-24T02:46:37.000Z | [
"pytorch",
"bert",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
] | thingsu | 204 | transformers | Fine-tuned the kykim/bert-kor-base model as a dense passage retrieval context encoder on the KLUE dataset.
Experiment results: https://wandb.ai/thingsu/DenseRetrieval
Corpus: Korean Wikipedia Corpus
Training strategy:
- Pretrained Model: kykim/bert-kor-base
- Inverse Cloze Task: 16 epochs, on KorQuAD v1.0 and the KLUE MRC dataset
- In-batch Negatives: 12 epochs, on the KLUE MRC dataset, with negatives randomly sampled from the top 100 passages retrieved per query by sparse retrieval (TF-IDF)
You need to use the Korean Wikipedia corpus with this model.
<pre>
<code>
from transformers import AutoTokenizer, BertPreTrainedModel, BertModel
class BertEncoder(BertPreTrainedModel):
def __init__(self, config):
super(BertEncoder, self).__init__(config)
self.bert = BertModel(config)
self.init_weights()
def forward(self, input_ids, attention_mask=None, token_type_ids=None):
outputs = self.bert(input_ids, attention_mask, token_type_ids)
pooled_output = outputs[1]
return pooled_output
model_name = 'kykim/bert-kor-base'
tokenizer = AutoTokenizer.from_pretrained(model_name)
q_encoder = BertEncoder.from_pretrained("thingsu/koDPR_question")
p_encoder = BertEncoder.from_pretrained("thingsu/koDPR_context")
</code>
</pre>
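A minimal retrieval-scoring sketch, assuming the `tokenizer`, `q_encoder`, and `p_encoder` above are already loaded; the query and passage strings are purely illustrative:
```python
import torch

query = "대한민국의 수도는 어디인가?"        # illustrative query
passage = "서울특별시는 대한민국의 수도이다."  # illustrative passage

q_inputs = tokenizer(query, return_tensors="pt")
p_inputs = tokenizer(passage, return_tensors="pt")

with torch.no_grad():
    q_emb = q_encoder(**q_inputs)  # pooled [CLS] embedding, shape (1, hidden)
    p_emb = p_encoder(**p_inputs)  # pooled [CLS] embedding, shape (1, hidden)

# Dot-product similarity: candidate passages are ranked by this score.
score = (q_emb @ p_emb.T).item()
print(score)
```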
|
|
thingsu/koDPR_question | 2021-05-24T02:47:00.000Z | [
"pytorch",
"bert",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
] | thingsu | 193 | transformers | Fine-tuned the kykim/bert-kor-base model as a dense passage retrieval question encoder on the KLUE dataset.
Experiment results: https://wandb.ai/thingsu/DenseRetrieval
Corpus: Korean Wikipedia Corpus
Training strategy:
- Pretrained Model: kykim/bert-kor-base
- Inverse Cloze Task: 16 epochs, on KorQuAD v1.0 and the KLUE MRC dataset
- In-batch Negatives: 12 epochs, on the KLUE MRC dataset, with negatives randomly sampled from the top 100 passages retrieved per query by sparse retrieval (TF-IDF)
You need to use the Korean Wikipedia corpus with this model.
<pre>
<code>
from transformers import AutoTokenizer, BertPreTrainedModel, BertModel
class BertEncoder(BertPreTrainedModel):
def __init__(self, config):
super(BertEncoder, self).__init__(config)
self.bert = BertModel(config)
self.init_weights()
def forward(self, input_ids, attention_mask=None, token_type_ids=None):
outputs = self.bert(input_ids, attention_mask, token_type_ids)
pooled_output = outputs[1]
return pooled_output
model_name = 'kykim/bert-kor-base'
tokenizer = AutoTokenizer.from_pretrained(model_name)
q_encoder = BertEncoder.from_pretrained("thingsu/koDPR_question")
p_encoder = BertEncoder.from_pretrained("thingsu/koDPR_context")
</code>
</pre> |
|
thomasdehaene/gpt2-large-dutch-finetune-oscar-10m-3epoch | 2021-05-23T13:08:54.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | thomasdehaene | 33 | transformers | |
thomwolf/test-model | 2021-01-21T14:17:13.000Z | [] | [
".gitattributes"
] | thomwolf | 0 | |||
thomwolf/vqgan_imagenet_f16_1024 | 2021-06-08T21:16:25.000Z | [
"pytorch",
"vqgan_model",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | thomwolf | 3 | transformers | ||
thu-coai/CDial-GPT2_LCCC-base | 2020-12-23T07:10:27.000Z | [
"pytorch",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | thu-coai | 25 | transformers | # CDial-GPT2_LCCC-base
https://github.com/thu-coai/CDial-GPT |
|
thu-coai/CDial-GPT_LCCC-base | 2020-12-23T06:47:44.000Z | [
"pytorch",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | thu-coai | 21 | transformers | # CDial-GPT_LCCC-base
https://github.com/thu-coai/CDial-GPT |
|
thu-coai/CDial-GPT_LCCC-large | 2020-12-23T05:56:25.000Z | [
"pytorch",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | thu-coai | 83 | transformers | # CDial-GPT_LCCC-large
https://github.com/thu-coai/CDial-GPT |
|
thu-coai/ct5-small | 2020-12-16T08:50:44.000Z | [] | [
".gitattributes"
] | thu-coai | 0 | |||
thunlp/Lawformer | 2021-05-09T08:10:17.000Z | [
"pytorch",
"longformer",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | thunlp | 344 | transformers | ## Lawformer
### Introduction
This repository provides the source code and checkpoints of the paper "Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents". You can download the checkpoint from the [huggingface model hub](https://huggingface.co/xcjthu/Lawformer) or from [here](https://data.thunlp.org/legal/Lawformer.zip).
### Easy Start
We have uploaded our model to the huggingface model hub. Make sure you have installed transformers.
```python
>>> from transformers import AutoModel, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
>>> model = AutoModel.from_pretrained("thunlp/Lawformer")
>>> inputs = tokenizer("任某提起诉讼,请求判令解除婚姻关系并对夫妻共同财产进行分割。", return_tensors="pt")
>>> outputs = model(**inputs)
```
### Cite
If you use the pre-trained models, please cite this paper:
```
@article{xiao2021lawformer,
title={Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents},
author={Xiao, Chaojun and Hu, Xueyu and Liu, Zhiyuan and Tu, Cunchao and Sun, Maosong},
year={2021}
}
```
|
thunlp/neuba-bert | 2021-06-11T06:47:57.000Z | [
"pytorch",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | thunlp | 5 | transformers |