pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0 to 18.3M) | metadata (stringlengths, 2 to 1.07B) | id (stringlengths, 5 to 122) | last_modified (null) | tags (listlengths, 1 to 1.84k) | sha (null) | created_at (stringlengths, 25)
---|---|---|---|---|---|---|---|---|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xxlarge-v2-squad2-covid-qa-deepset
This model is a fine-tuned version of [mfeb/albert-xxlarge-v2-squad2](https://huggingface.co/mfeb/albert-xxlarge-v2-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
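As a rough illustration, these settings map onto Hugging Face `TrainingArguments` as in the minimal sketch below (the output directory name is illustrative; dataset loading, preprocessing, and the `Trainer` call are omitted):
```python
from transformers import TrainingArguments

# Sketch only: mirrors the reported hyperparameters. The Adam betas and
# epsilon listed above are the Trainer's defaults, so they need no flags.
training_args = TrainingArguments(
    output_dir="albert-xxlarge-v2-squad2-covid-qa-deepset",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```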
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "albert-xxlarge-v2-squad2-covid-qa-deepset", "results": []}]} | armageddon/albert-xxlarge-v2-squad2-covid-qa-deepset | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_bert_base_uncased_squad2
This model is a fine-tuned version of [twmkn9/bert-base-uncased-squad2](https://huggingface.co/twmkn9/bert-base-uncased-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
| {"tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "covid_qa_analysis_bert_base_uncased_squad2", "results": []}]} | armageddon/bert-base-uncased-squad2-covid-qa-deepset | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-squad2-covid-qa-deepset
This model is a fine-tuned version of [phiyodr/bert-large-finetuned-squad2](https://huggingface.co/phiyodr/bert-large-finetuned-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "bert-large-uncased-squad2-covid-qa-deepset", "results": []}]} | armageddon/bert-large-uncased-squad2-covid-qa-deepset | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_albert_base_squad_v2
This model is a fine-tuned version of [abhilash1910/albert-squad-v2](https://huggingface.co/abhilash1910/albert-squad-v2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "covid_qa_analysis_albert_base_squad_v2", "results": []}]} | armageddon/albert-squad-v2-covid-qa-deepset | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_roberta-base-squad2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "covid_qa_analysis_roberta-base-squad2", "results": []}]} | armageddon/roberta-base-squad2-covid-qa-deepset | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_roberta-large-squad2
This model is a fine-tuned version of [deepset/roberta-large-squad2](https://huggingface.co/deepset/roberta-large-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "covid_qa_analysis_roberta-large-squad2", "results": []}]} | armageddon/roberta-large-squad2-covid-qa-deepset | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-covid-qa-deepset
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
| {"tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "distilbert-base-uncased-squad2-covid-qa-deepset", "results": []}]} | armageddon/distilbert-base-uncased-squad2-covid-qa-deepset | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-squad2-covid-qa-deepset
This model is a fine-tuned version of [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "electra-base-squad2-covid-qa-deepset", "results": []}]} | armageddon/electra-base-squad2-covid-qa-deepset | null | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8596
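For reference, a language-modeling loss in nats per token corresponds to a perplexity of `exp(loss)`; for the evaluation loss above:
```python
import math

# Perplexity implied by the evaluation loss reported above
print(math.exp(6.8596))  # ~953
```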
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0963 | 1.0 | 2346 | 7.0570 |
| 6.9063 | 2.0 | 4692 | 6.8721 |
| 6.8585 | 3.0 | 7038 | 6.8931 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-cased-wikitext2", "results": []}]} | arman0320/bert-base-cased-wikitext2 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | arman0320/gpt2-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | arman223147/gpt2-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | zhihan1996/DNA_bert_3 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | zhihan1996/DNA_bert_4 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | zhihan1996/DNA_bert_5 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | zhihan1996/DNA_bert_6 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | arminarj/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | **A casual chatbot**
This is a DialoGPT-medium model fine-tuned to talk like Tony Stark. Currently, it is trained only on the script of Iron Man 3. | {"language": ["en"], "license": "MIT", "tags": ["conversational"]} | arnav7633/DialoGPT-medium-tony_stark | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Model description
**bert-base-uncased-kin** is a fine-tuned version of the BERT base uncased model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
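A minimal sketch, under the assumption of a standard Hugging Face `Trainer` setup, of how the hyperparameters above could be expressed (preprocessing and label alignment are omitted; names are illustrative):
```python
from datasets import load_dataset
from transformers import AutoModelForTokenClassification, AutoTokenizer, TrainingArguments

dataset = load_dataset("masakhaner", "kin")  # Kinyarwanda split of MasakhaNER
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# MasakhaNER uses 9 BIO tags: O plus B-/I- variants of PER, ORG, LOC, DATE
model = AutoModelForTokenClassification.from_pretrained("bert-base-uncased", num_labels=9)

args = TrainingArguments(
    output_dir="bert-base-uncased-kin",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    num_train_epochs=30,
)
# The maximum sequence length (164) would be applied at tokenization time, e.g.
# tokenizer(tokens, is_split_into_words=True, truncation=True, max_length=164)
```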
# Evaluation Data
We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
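A minimal sketch of how such entity-level scores can be computed with `seqeval` (an assumed choice of tooling; the card does not state the evaluation library):
```python
from seqeval.metrics import f1_score, precision_score, recall_score

# Toy example: the gold PER entity is found, the gold LOC entity is missed
y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O"]]
print(precision_score(y_true, y_pred),  # 1.0
      recall_score(y_true, y_pred),     # 0.5
      f1_score(y_true, y_pred))         # ~0.67
```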
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**bert-base-uncased-kin** | 75.00 | 80.09 | 77.47
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-kin")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/bert-base-uncased-kin")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Rayon Sports yasinyishije rutahizamu w’Umurundi"
ner_results = nlp(example)
print(ner_results)
``` | {"language": ["kin"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n\u2019u Rwanda, bushingiye nanone ku bufatanye hagati y\u2019imigabane ya Afurika n\u2019u Burayi."}]} | arnolfokam/bert-base-uncased-kin | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"kin",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Model description
**bert-base-uncased-pcm** is a fine-tuned version of the BERT base uncased model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**bert-base-uncased-pcm**| 88.61 | 84.17 | 86.33
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-pcm")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/bert-base-uncased-pcm")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
ner_results = nlp(example)
print(ner_results)
``` | {"language": ["pcm"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."}]} | arnolfokam/bert-base-uncased-pcm | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"pcm",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Model description
**bert-base-uncased-swa** is a fine-tuned version of the BERT base uncased model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**bert-base-uncased-swa**| 83.38 | 89.32 | 86.26
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-swa")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/bert-base-uncased-swa")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
ner_results = nlp(example)
print(ner_results)
``` | {"language": ["swa"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."}]} | arnolfokam/bert-base-uncased-swa | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"swa",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Model description
**mbert-base-uncased-kin** is a fine-tuned version of the multilingual BERT base uncased model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**mbert-base-uncased-kin**| 81.35 | 83.98 | 82.64
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-kin")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-kin")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Rayon Sports yasinyishije rutahizamu w’Umurundi"
ner_results = nlp(example)
print(ner_results)
``` | {"language": ["kin"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n\u2019u Rwanda, bushingiye nanone ku bufatanye hagati y\u2019imigabane ya Afurika n\u2019u Burayi."}]} | arnolfokam/mbert-base-uncased-kin | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"kin",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Model description
**mbert-base-uncased-ner-kin** is a fine-tuned version of a Multilingual BERT base uncased model that was previously fine-tuned for Named Entity Recognition on 10 high-resourced languages. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**mbert-base-uncased-ner-kin** | 81.95 | 81.55 | 81.75
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-ner-kin")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-ner-kin")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Rayon Sports yasinyishije rutahizamu w’Umurundi"
ner_results = nlp(example)
print(ner_results)
``` | {"language": ["kin"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n\u2019u Rwanda, bushingiye nanone ku bufatanye hagati y\u2019imigabane ya Afurika n\u2019u Burayi."}]} | arnolfokam/mbert-base-uncased-ner-kin | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"kin",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Model description
**mbert-base-uncased-ner-pcm** is a fine-tuned version of a Multilingual BERT base uncased model that was previously fine-tuned for Named Entity Recognition on 10 high-resourced languages. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**mbert-base-uncased-ner-pcm**| 90.38 | 82.44 | 86.23
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-ner-pcm")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-ner-pcm")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
ner_results = nlp(example)
print(ner_results)
``` | {"language": ["pcm"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."}]} | arnolfokam/mbert-base-uncased-ner-pcm | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"pcm",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Model description
**mbert-base-uncased-ner-swa** is a fine-tuned version of a Multilingual BERT base uncased model that was previously fine-tuned for Named Entity Recognition on 10 high-resourced languages. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**mbert-base-uncased-ner-swa**| 82.85 | 88.13 | 85.41
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-ner-swa")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-ner-swa")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
ner_results = nlp(example)
print(ner_results)
``` | {"language": ["swa"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."}]} | arnolfokam/mbert-base-uncased-ner-swa | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"swa",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Model description
**mbert-base-uncased-pcm** is a fine-tuned version of the Multilingual BERT base uncased model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**mbert-base-uncased-pcm**| 90.46 | 83.23 | 86.69
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-pcm")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-pcm")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
ner_results = nlp(example)
print(ner_results)
``` | {"language": ["pcm"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."}]} | arnolfokam/mbert-base-uncased-pcm | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"pcm",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Model description
**mbert-base-uncased-swa** is a fine-tuned version of the Multilingual BERT base uncased model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**mbert-base-uncased-swa**| 85.59 | 90.80 | 88.12
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-swa")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-swa")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
ner_results = nlp(example)
print(ner_results)
``` | {"language": ["swa"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."}]} | arnolfokam/mbert-base-uncased-swa | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"swa",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Model description
**roberta-base-kin** is a fine-tuned version of the RoBERTa base model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**roberta-base-kin** | 76.26 | 80.58 | 78.36
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-kin")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-kin")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Rayon Sports yasinyishije rutahizamu w’Umurundi"
ner_results = nlp(example)
print(ner_results)
``` | {"language": ["kin"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n\u2019u Rwanda, bushingiye nanone ku bufatanye hagati y\u2019imigabane ya Afurika n\u2019u Burayi."}]} | arnolfokam/roberta-base-kin | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"NER",
"kin",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Model description
**roberta-base-pcm** is a fine-tuned version of the RoBERTa base model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**roberta-base-pcm**| 88.55 | 82.45 | 85.39
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-pcm")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-pcm")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
ner_results = nlp(example)
print(ner_results)
``` | {"language": ["pcm"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."}]} | arnolfokam/roberta-base-pcm | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"NER",
"pcm",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Model description
**roberta-base-swa** is a fine-tuned version of the RoBERTa base model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**roberta-base-swa**| 80.58 | 86.79 | 83.57
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-swa")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-swa")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
ner_results = nlp(example)
print(ner_results)
``` | {"language": ["swa"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."}]} | arnolfokam/roberta-base-swa | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"NER",
"swa",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobert-finetuned-squad_kor_v1
This model is a fine-tuned version of [monologg/kobert](https://huggingface.co/monologg/kobert) on the squad_kor_v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.0155 | 1.0 | 3808 | 4.0928 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["squad_kor_v1"]} | arogyaGurkha/kobert-finetuned-squad_kor_v1 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_kor_v1",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-base-discriminator-finetuned-squad_kor_v1
This model is a fine-tuned version of [monologg/koelectra-base-discriminator](https://huggingface.co/monologg/koelectra-base-discriminator) on the squad_kor_v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5774 | 1.0 | 4025 | 0.5589 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["squad_kor_v1"]} | arogyaGurkha/koelectra-base-discriminator-finetuned-squad_kor_v1 | null | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad_kor_v1",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | arogyaGurkha/koelectra-base-v3-finetuned-korquad-finetuned-squad_kor_v1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
Connect with me on LinkedIn
- [linkedin.com/in/arpanghoshal](https://www.linkedin.com/in/arpanghoshal)
## What is GoEmotions
GoEmotions is a dataset of 58,000 Reddit comments labelled with 28 emotion categories:
- admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise + neutral
## What is RoBERTa
RoBERTa builds on BERT’s language masking strategy and modifies key hyperparameters in BERT, including removing BERT’s next-sentence pretraining objective, and training with much larger mini-batches and learning rates. RoBERTa was also trained on an order of magnitude more data than BERT, for a longer amount of time. This allows RoBERTa representations to generalize even better to downstream tasks compared to BERT.
## Hyperparameters
| Parameter | |
| ----------------- | :---: |
| Learning rate | 5e-5 |
| Epochs | 10 |
| Max Seq Length | 50 |
| Batch size | 16 |
| Warmup Proportion | 0.1 |
| Epsilon | 1e-8 |
## Results
Best result: `Macro F1` of 49.30%
## Usage
```python
from transformers import RobertaTokenizerFast, TFRobertaForSequenceClassification, pipeline
tokenizer = RobertaTokenizerFast.from_pretrained("arpanghoshal/EmoRoBERTa")
model = TFRobertaForSequenceClassification.from_pretrained("arpanghoshal/EmoRoBERTa")
emotion = pipeline('sentiment-analysis', model='arpanghoshal/EmoRoBERTa')
emotion_labels = emotion("Thanks for using it.")
print(emotion_labels)
```
Output
```
[{'label': 'gratitude', 'score': 0.9964383244514465}]
```
| {"language": "en", "license": "mit", "tags": ["text-classification", "tensorflow", "roberta"], "datasets": ["go_emotions"]} | arpanghoshal/EmoRoBERTa | null | [
"transformers",
"tf",
"roberta",
"text-classification",
"tensorflow",
"en",
"dataset:go_emotions",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
sentence-similarity | sentence-transformers |
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
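As an illustration of the information-retrieval use case, here is a minimal semantic-search sketch using the `sentence_transformers.util` helpers (the corpus and query strings are made up for the example):
```python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
corpus = ["A man is eating food.", "A monkey is playing drums.", "A cheetah chases its prey."]
query = "Someone is making music"
# Encode the corpus once, then encode each incoming query
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)
# Cosine-similarity search over the corpus, returning the best match
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(hits)  # [[{'corpus_id': 1, 'score': ...}]]
```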
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
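A minimal sketch of this in-batch objective (simplified relative to the actual `train_script.py`; the similarity scale of 20 is an assumption):
```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor_embeddings, positive_embeddings, scale=20.0):
    # Normalize so that the dot product equals cosine similarity
    anchors = F.normalize(anchor_embeddings, p=2, dim=1)
    positives = F.normalize(positive_embeddings, p=2, dim=1)
    # (batch, batch) matrix of similarities between every anchor and every candidate
    scores = anchors @ positives.T * scale
    # The true pair for row i is column i, so the targets are the diagonal indices
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```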
#### Hyperparameters
We trained our model on a TPU v3-8. We trained for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is available in this repository: `train_script.py`.
#### Training data
We used the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | | {"language": "en", "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | arredondos/my_sentence_transformer | null | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | arshyajabbari/Persian_Wav2Vec2_Test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers | {} | arshyajabbari/wav2vec2-large-persian-demo | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | arshyajabbari/wav2vec2-persian | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
token-classification | transformers | {} | artemis13fowl/bert-finetuned-ner-accelerate | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
token-classification | transformers | {} | artemis13fowl/bert-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | artemis13fowl/code-search-net-tokenizer | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4725
## Model description
More information needed
## Intended uses & limitations
More information needed
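As a minimal usage sketch (not from the model authors), the checkpoint can be loaded with the fill-mask pipeline; the example sentence is illustrative:
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="artemis13fowl/distilbert-base-uncased-finetuned-imdb")
# DistilBERT uses the [MASK] token
print(fill_mask("This movie was an absolute [MASK]."))
```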
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5756 | 2.0 | 314 | 2.4230 |
| 2.5395 | 3.0 | 471 | 2.4358 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imdb"], "model-index": [{"name": "distilbert-base-uncased-finetuned-imdb", "results": []}]} | artemis13fowl/distilbert-base-uncased-finetuned-imdb | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-lv-v05
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3862
- Wer: 0.2588
## Model description
More information needed
## Intended uses & limitations
More information needed
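As a minimal usage sketch (not from the model authors), the checkpoint can be used for Latvian transcription via the ASR pipeline; the audio path is a placeholder and the recording is assumed to be 16 kHz mono:
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition",
               model="artursz/wav2vec2-large-xls-r-300m-lv-v05")
print(asr("sample_latvian_utterance.wav"))  # {'text': '...'}
```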
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8836 | 2.81 | 400 | 0.8722 | 0.7244 |
| 0.5365 | 5.63 | 800 | 0.4622 | 0.4812 |
| 0.277 | 8.45 | 1200 | 0.4348 | 0.4056 |
| 0.1947 | 11.27 | 1600 | 0.4223 | 0.3636 |
| 0.1655 | 14.08 | 2000 | 0.4084 | 0.3465 |
| 0.1441 | 16.9 | 2400 | 0.4329 | 0.3497 |
| 0.121 | 19.72 | 2800 | 0.4371 | 0.3324 |
| 0.1062 | 22.53 | 3200 | 0.4202 | 0.3198 |
| 0.0937 | 25.35 | 3600 | 0.4063 | 0.3265 |
| 0.0871 | 28.17 | 4000 | 0.4253 | 0.3255 |
| 0.0755 | 30.98 | 4400 | 0.4368 | 0.3194 |
| 0.0627 | 33.8 | 4800 | 0.4067 | 0.2908 |
| 0.0595 | 36.62 | 5200 | 0.3929 | 0.2973 |
| 0.0523 | 39.44 | 5600 | 0.3748 | 0.2817 |
| 0.0434 | 42.25 | 6000 | 0.3769 | 0.2711 |
| 0.0391 | 45.07 | 6400 | 0.3901 | 0.2653 |
| 0.0319 | 47.88 | 6800 | 0.3862 | 0.2588 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-lv-v05", "results": []}]} | artursz/wav2vec2-large-xls-r-300m-lv-v05 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | artyeth/Dorian | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | arubenecia/gpt2-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {"license": "gpl-3.0"} | arunkottilukkal/multi-lingual-ner-ar-de-en-es-fr-it-lv-nl-pt-zh | null | [
"license:gpl-3.0",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers | {} | arunkumar629/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3222
## Model description
More information needed
## Intended uses & limitations
More information needed
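As a minimal usage sketch (not from the model authors), the checkpoint can be queried with the question-answering pipeline:
```python
from transformers import pipeline
qa = pipeline("question-answering", model="arvalinno/albert-base-v2-finetuned-squad")
result = qa(question="Who painted the Mona Lisa?",
            context="The Mona Lisa was painted by Leonardo da Vinci.")
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': 'Leonardo da Vinci'}
```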
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1893 | 1.0 | 3052 | 0.2808 |
| 0.1209 | 2.0 | 6104 | 0.2787 |
| 0.069 | 3.0 | 9156 | 0.3222 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "albert-base-v2-finetuned-squad", "results": []}]} | arvalinno/albert-base-v2-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-indosquad-v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6650
## Model description
More information needed
## Intended uses & limitations
More information needed
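As a minimal usage sketch (not from the model authors), and assuming the training data follows the SQuAD v2 format with unanswerable questions, the pipeline's `handle_impossible_answer` flag can be enabled; the Indonesian example is illustrative:
```python
from transformers import pipeline
qa = pipeline("question-answering",
              model="arvalinno/distilbert-base-uncased-finetuned-indosquad-v2")
# "Who was the first president of Indonesia?" / "Soekarno was the first president of Indonesia."
result = qa(question="Siapa presiden pertama Indonesia?",
            context="Soekarno adalah presiden pertama Indonesia.",
            handle_impossible_answer=True)
print(result)
```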
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.9015 | 1.0 | 9676 | 1.5706 |
| 1.6438 | 2.0 | 19352 | 1.5926 |
| 1.4714 | 3.0 | 29028 | 1.5253 |
| 1.3486 | 4.0 | 38704 | 1.6650 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-indosquad-v2", "results": []}]} | arvalinno/distilbert-base-uncased-finetuned-indosquad-v2 | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | arvalinno/distilbert-base-uncased-finetuned-squad-ori | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7604 | 1.0 | 6366 | 1.5329 |
| 1.4784 | 2.0 | 12732 | 1.3930 |
| 1.3082 | 3.0 | 19098 | 1.4232 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]} | arvalinno/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | arvalinno/indobert-base-p2-finetuned-indosquad-v2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | aryanbhosale/DialoGPT-medium-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | aryanpatke/bert-finetuned-mrpc | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aryanpatke/bert-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aryanpatke/code-search-net | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aryanpatke/distilbert-finetuned-imdb | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aryanpatke/marian-finetuned-kde4-en-to-fr | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | asad/DialoGPT-small-harryporter_bot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | asaelavia/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers |
# Arabic-ALBERT Base
Arabic edition of ALBERT Base pretrained language model
_If you use any of these models in your work, please cite this work as:_
```
@software{ali_safaya_2020_4718724,
author = {Ali Safaya},
title = {Arabic-ALBERT},
month = aug,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.4718724},
url = {https://doi.org/10.5281/zenodo.4718724}
}
```
## Pretraining data
The models were pretrained on ~4.4 Billion words:
- Arabic version of [OSCAR](https://oscar-corpus.com/) (unshuffled version of the corpus) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
__Notes on training data:__
- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Non-Arabic characters were lowercased as a preprocessing step; since Arabic characters do not have upper or lower case, there are no separate cased and uncased versions of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic; they contain some dialectal Arabic too.
## Pretraining details
- These models were trained using Google ALBERT's GitHub [repository](https://github.com/google-research/albert) on a single TPU v3-8 provided for free by [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 7M steps with a batch size of 64, instead of 125K steps with a batch size of 4096.
## Models
| | albert-base | albert-large | albert-xlarge |
|:---:|:---:|:---:|:---:|
| Hidden Layers | 12 | 24 | 24 |
| Attention heads | 12 | 16 | 32 |
| Hidden size | 768 | 1024 | 2048 |
## Results
For further details on the models' performance or any other queries, please refer to [Arabic-ALBERT](https://github.com/KUIS-AI-Lab/Arabic-ALBERT/)
## How to use
You can use these models by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then initializing them like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
# loading the tokenizer
base_tokenizer = AutoTokenizer.from_pretrained("kuisailab/albert-base-arabic")
# loading the model
base_model = AutoModelForMaskedLM.from_pretrained("kuisailab/albert-base-arabic")
```
## Acknowledgement
Thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting these models on their servers 😊
| {"language": "ar", "tags": ["ar", "masked-lm"], "datasets": ["oscar", "wikipedia"]} | asafaya/albert-base-arabic | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"ar",
"masked-lm",
"dataset:oscar",
"dataset:wikipedia",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# Arabic-ALBERT Large
Arabic edition of ALBERT Large pretrained language model
_If you use any of these models in your work, please cite this work as:_
```
@software{ali_safaya_2020_4718724,
author = {Ali Safaya},
title = {Arabic-ALBERT},
month = aug,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.4718724},
url = {https://doi.org/10.5281/zenodo.4718724}
}
```
## Pretraining data
The models were pretrained on ~4.4 Billion words:
- Arabic version of [OSCAR](https://oscar-corpus.com/) (unshuffled version of the corpus) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
__Notes on training data:__
- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Non-Arabic characters were lowercased as a preprocessing step; since Arabic characters do not have upper or lower case, there are no separate cased and uncased versions of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic; they contain some dialectal Arabic too.
## Pretraining details
- These models were trained using Google ALBERT's GitHub [repository](https://github.com/google-research/albert) on a single TPU v3-8 provided for free by [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 7M steps with a batch size of 64, instead of 125K steps with a batch size of 4096.
## Models
| | albert-base | albert-large | albert-xlarge |
|:---:|:---:|:---:|:---:|
| Hidden Layers | 12 | 24 | 24 |
| Attention heads | 12 | 16 | 32 |
| Hidden size | 768 | 1024 | 2048 |
## Results
For further details on the models' performance or any other queries, please refer to [Arabic-ALBERT](https://github.com/KUIS-AI-Lab/Arabic-ALBERT/)
## How to use
You can use these models by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then initializing them like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
# loading the tokenizer
tokenizer = AutoTokenizer.from_pretrained("kuisailab/albert-large-arabic")
# loading the model
model = AutoModelForMaskedLM.from_pretrained("kuisailab/albert-large-arabic")
```
## Acknowledgement
Thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting these models on their servers 😊
| {"language": "ar", "tags": ["ar", "masked-lm"], "datasets": ["oscar", "wikipedia"]} | asafaya/albert-large-arabic | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"ar",
"masked-lm",
"dataset:oscar",
"dataset:wikipedia",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# Arabic-ALBERT Xlarge
Arabic edition of ALBERT Xlarge pretrained language model
_If you use any of these models in your work, please cite this work as:_
```
@software{ali_safaya_2020_4718724,
author = {Ali Safaya},
title = {Arabic-ALBERT},
month = aug,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.4718724},
url = {https://doi.org/10.5281/zenodo.4718724}
}
```
## Pretraining data
The models were pretrained on ~4.4 Billion words:
- Arabic version of [OSCAR](https://oscar-corpus.com/) (unshuffled version of the corpus) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
__Notes on training data:__
- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Non-Arabic characters were lowercased as a preprocessing step; since Arabic characters do not have upper or lower case, there are no separate cased and uncased versions of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic; they contain some dialectal Arabic too.
## Pretraining details
- These models were trained using Google ALBERT's GitHub [repository](https://github.com/google-research/albert) on a single TPU v3-8 provided for free by [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 7M steps with a batch size of 64, instead of 125K steps with a batch size of 4096.
## Models
| | albert-base | albert-large | albert-xlarge |
|:---:|:---:|:---:|:---:|
| Hidden Layers | 12 | 24 | 24 |
| Attention heads | 12 | 16 | 32 |
| Hidden size | 768 | 1024 | 2048 |
## Results
For further details on the models' performance or any other queries, please refer to [Arabic-ALBERT](https://github.com/KUIS-AI-Lab/Arabic-ALBERT/)
## How to use
You can use these models by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then initializing them like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
# loading the tokenizer
tokenizer = AutoTokenizer.from_pretrained("kuisailab/albert-xlarge-arabic")
# loading the model
model = AutoModelForMaskedLM.from_pretrained("kuisailab/albert-xlarge-arabic")
```
## Acknowledgement
Thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting these models on their servers 😊
| {"language": "ar", "tags": ["ar", "masked-lm"], "datasets": ["oscar", "wikipedia"]} | asafaya/albert-xlarge-arabic | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"ar",
"masked-lm",
"dataset:oscar",
"dataset:wikipedia",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# Arabic BERT Model
Pretrained BERT base language model for Arabic
_If you use this model in your work, please cite this paper:_
```
@inproceedings{safaya-etal-2020-kuisail,
title = "{KUISAIL} at {S}em{E}val-2020 Task 12: {BERT}-{CNN} for Offensive Speech Identification in Social Media",
author = "Safaya, Ali and
Abdullatif, Moutasem and
Yuret, Deniz",
booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
month = dec,
year = "2020",
address = "Barcelona (online)",
publisher = "International Committee for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.semeval-1.271",
pages = "2054--2059",
}
```
## Pretraining Corpus
`arabic-bert-base` model was pretrained on ~8.2 Billion words:
- Arabic version of [OSCAR](https://traces1.inria.fr/oscar/) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
and other Arabic resources which sum up to ~95GB of text.
__Notes on training data:__
- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Non-Arabic characters were lowercased as a preprocessing step; since Arabic characters do not have upper or lower case, there are no separate cased and uncased versions of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic; they contain some dialectal Arabic too.
## Pretraining details
- This model was trained using Google BERT's GitHub [repository](https://github.com/google-research/bert) on a single TPU v3-8 provided for free by [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 3M steps with a batch size of 128, instead of 1M steps with a batch size of 256.
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then initializing it like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-base-arabic")
model = AutoModelForMaskedLM.from_pretrained("asafaya/bert-base-arabic")
```
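For masked prediction specifically, a minimal fill-mask sketch (the Arabic example sentence is illustrative):
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="asafaya/bert-base-arabic")
# "Arabic is an official [MASK] in many countries."; a plausible completion is "لغة" (language)
print(fill_mask("اللغة العربية هي [MASK] رسمية في العديد من الدول."))
```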
## Results
For further details on the model's performance or any other queries, please refer to [Arabic-BERT](https://github.com/alisafaya/Arabic-BERT)
## Acknowledgement
Thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting this model on their servers 😊
| {"language": "ar", "datasets": ["oscar", "wikipedia"]} | asafaya/bert-base-arabic | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"ar",
"dataset:oscar",
"dataset:wikipedia",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# Arabic BERT Large Model
Pretrained BERT Large language model for Arabic
_If you use this model in your work, please cite this paper:_
```
@inproceedings{safaya-etal-2020-kuisail,
title = "{KUISAIL} at {S}em{E}val-2020 Task 12: {BERT}-{CNN} for Offensive Speech Identification in Social Media",
author = "Safaya, Ali and
Abdullatif, Moutasem and
Yuret, Deniz",
booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
month = dec,
year = "2020",
address = "Barcelona (online)",
publisher = "International Committee for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.semeval-1.271",
pages = "2054--2059",
}
```
## Pretraining Corpus
`arabic-bert-large` model was pretrained on ~8.2 Billion words:
- Arabic version of [OSCAR](https://traces1.inria.fr/oscar/) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
and other Arabic resources which sum up to ~95GB of text.
__Notes on training data:__
- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Non-Arabic characters were lowercased as a preprocessing step; since Arabic characters do not have upper or lower case, there are no separate cased and uncased versions of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic; they contain some dialectal Arabic too.
## Pretraining details
- This model was trained using Google BERT's GitHub [repository](https://github.com/google-research/bert) on a single TPU v3-8 provided for free by [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 3M steps with a batch size of 128, instead of 1M steps with a batch size of 256.
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then initializing it like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-large-arabic")
model = AutoModelForMaskedLM.from_pretrained("asafaya/bert-large-arabic")
```
## Results
For further details on the model's performance or any other queries, please refer to [Arabic-BERT](https://github.com/alisafaya/Arabic-BERT)
## Acknowledgement
Thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting this model on their servers 😊
| {"language": "ar", "datasets": ["oscar", "wikipedia"]} | asafaya/bert-large-arabic | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"ar",
"dataset:oscar",
"dataset:wikipedia",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# Arabic BERT Medium Model
Pretrained BERT Medium language model for Arabic
_If you use this model in your work, please cite this paper:_
```
@inproceedings{safaya-etal-2020-kuisail,
title = "{KUISAIL} at {S}em{E}val-2020 Task 12: {BERT}-{CNN} for Offensive Speech Identification in Social Media",
author = "Safaya, Ali and
Abdullatif, Moutasem and
Yuret, Deniz",
booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
month = dec,
year = "2020",
address = "Barcelona (online)",
publisher = "International Committee for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.semeval-1.271",
pages = "2054--2059",
}
```
## Pretraining Corpus
`arabic-bert-medium` model was pretrained on ~8.2 Billion words:
- Arabic version of [OSCAR](https://traces1.inria.fr/oscar/) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
and other Arabic resources which sum up to ~95GB of text.
__Notes on training data:__
- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Non-Arabic characters were lowercased as a preprocessing step; since Arabic characters do not have upper or lower case, there are no separate cased and uncased versions of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic; they contain some dialectal Arabic too.
## Pretraining details
- This model was trained using Google BERT's GitHub [repository](https://github.com/google-research/bert) on a single TPU v3-8 provided for free by [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 3M steps with a batch size of 128, instead of 1M steps with a batch size of 256.
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then initializing it like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-medium-arabic")
model = AutoModelForMaskedLM.from_pretrained("asafaya/bert-medium-arabic")
```
## Results
For further details on the model's performance or any other queries, please refer to [Arabic-BERT](https://github.com/alisafaya/Arabic-BERT)
## Acknowledgement
Thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting this model on their servers 😊
| {"language": "ar", "datasets": ["oscar", "wikipedia"]} | asafaya/bert-medium-arabic | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"ar",
"dataset:oscar",
"dataset:wikipedia",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# Arabic BERT Mini Model
Pretrained BERT Mini language model for Arabic
_If you use this model in your work, please cite this paper:_
```
@inproceedings{safaya-etal-2020-kuisail,
title = "{KUISAIL} at {S}em{E}val-2020 Task 12: {BERT}-{CNN} for Offensive Speech Identification in Social Media",
author = "Safaya, Ali and
Abdullatif, Moutasem and
Yuret, Deniz",
booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
month = dec,
year = "2020",
address = "Barcelona (online)",
publisher = "International Committee for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.semeval-1.271",
pages = "2054--2059",
}
```
## Pretraining Corpus
`arabic-bert-mini` model was pretrained on ~8.2 Billion words:
- Arabic version of [OSCAR](https://traces1.inria.fr/oscar/) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
and other Arabic resources which sum up to ~95GB of text.
__Notes on training data:__
- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Non-Arabic characters were lowercased as a preprocessing step; since Arabic characters do not have upper or lower case, there are no separate cased and uncased versions of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic; they contain some dialectal Arabic too.
## Pretraining details
- This model was trained using Google BERT's GitHub [repository](https://github.com/google-research/bert) on a single TPU v3-8 provided for free by [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 3M steps with a batch size of 128, instead of 1M steps with a batch size of 256.
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then initializing it like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-mini-arabic")
model = AutoModelForMaskedLM.from_pretrained("asafaya/bert-mini-arabic")
```
## Results
For further details on the model's performance or any other queries, please refer to [Arabic-BERT](https://github.com/alisafaya/Arabic-BERT)
## Acknowledgement
Thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting this model on their servers 😊
| {"language": "ar", "datasets": ["oscar", "wikipedia"]} | asafaya/bert-mini-arabic | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"ar",
"dataset:oscar",
"dataset:wikipedia",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/bart-base-squad-qg-default`
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without parameter search (default configuration is taken from [ERNIE-GEN](https://arxiv.org/abs/2001.11314)).
### Overview
- **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/bart-base-squad-qg-default")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/bart-base-squad-qg-default")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-base-squad-qg-default/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.87 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 56.88 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 40.75 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 31.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 24.4 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 25.99 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 64.48 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 52.49 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-base
- max_length: 512
- max_length_output: 32
- epoch: 10
- batch: 32
- lr: 1.25e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.1
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-base-squad-qg-default/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/bart-base-squad-qg-default", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 24.4, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 52.49, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 25.99, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.87, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 64.48, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/bart-base-squad-qg-default | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2001.11314",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/bart-base-squad-qg-no-answer`
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without answer information, i.e. it generates a question given only a paragraph (note that the standard model is fine-tuned to generate a question given a paragraph and an associated answer in the paragraph).
### Overview
- **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/bart-base-squad-qg-no-answer")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/bart-base-squad-qg-no-answer")
output = pipe("<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-base-squad-qg-no-answer/raw/main/eval/metric.first.sentence.paragraph_sentence.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.38 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 52.64 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 37.04 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 28.15 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 21.97 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 23.72 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 63.07 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 49.7 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_sentence']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-base
- max_length: 512
- max_length_output: 32
- epoch: 4
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-base-squad-qg-no-answer/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>", "example_title": "Question Generation Example 1"}, {"text": "<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>", "example_title": "Question Generation Example 2"}, {"text": "<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records . <hl>", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/bart-base-squad-qg-no-answer", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 21.97, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 49.7, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 23.72, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.38, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 63.07, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/bart-base-squad-qg-no-answer | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/bart-base-squad-qg-no-paragraph`
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without paragraph information, using only the sentence that contains the answer.
### Overview
- **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/bart-base-squad-qg-no-paragraph")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/bart-base-squad-qg-no-paragraph")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-base-squad-qg-no-paragraph/raw/main/eval/metric.first.sentence.sentence_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.7 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 55.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 39.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 30.44 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 23.86 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 25.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 63.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 51.43 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['sentence_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-base
- max_length: 128
- max_length_output: 32
- epoch: 3
- batch: 64
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-base-squad-qg-no-paragraph/raw/main/trainer_config.json).
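Because the configuration is plain JSON, it can be fetched and inspected directly. The snippet below is a minimal sketch using only the Python standard library; it assumes the URL above stays reachable and that the JSON keys match the hyperparameter names listed above.
```python
import json
import urllib.request

# Fetch the fine-tuning configuration published alongside the model
url = "https://huggingface.co/research-backup/bart-base-squad-qg-no-paragraph/raw/main/trainer_config.json"
with urllib.request.urlopen(url) as response:
    config = json.load(response)

# Key names ("lr", "epoch") are assumed to match the hyperparameter list above
print(config.get("lr"), config.get("epoch"))
```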
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/bart-base-squad-qg-no-paragraph", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 23.86, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 51.43, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 25.18, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.7, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 63.85, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/bart-base-squad-qg-no-paragraph | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `lmqg/bart-base-squad-qg`
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/bart-base-squad-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/bart-base-squad-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
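The model can also be loaded without the pipeline wrapper. The following is a minimal sketch with `AutoModelForSeq2SeqLM`; the generation settings (beam size, output length) are illustrative assumptions, not the official decoding configuration.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lmqg/bart-base-squad-qg")
model = AutoModelForSeq2SeqLM.from_pretrained("lmqg/bart-base-squad-qg")

text = "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
inputs = tokenizer(text, return_tensors="pt")
# Beam search settings here are illustrative, not the official decoding config
output_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```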
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.87 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 56.92 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 40.98 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 31.44 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 24.68 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 26.05 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 64.47 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 52.66 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 95.49 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 70.38 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 95.55 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 70.67 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 95.44 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 70.1 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/bart-base-squad-ae`](https://huggingface.co/lmqg/bart-base-squad-ae). [raw metric file](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.lmqg_bart-base-squad-ae.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 92.84 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 64.24 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 92.75 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 64.46 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 92.95 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 64.11 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metrics (Question Generation, Out-of-Domain)***
| Dataset | Type | BERTScore| Bleu_4 | METEOR | MoverScore | ROUGE_L | Link |
|:--------|:-----|---------:|-------:|-------:|-----------:|--------:|-----:|
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | amazon | 90.49 | 5.82 | 21.27 | 60.27 | 23.82 | [link](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | new_wiki | 93.07 | 10.73 | 26.23 | 65.67 | 28.44 | [link](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | nyt | 92.36 | 7.65 | 24.43 | 63.69 | 23.9 | [link](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | reddit | 90.57 | 5.38 | 20.4 | 60.14 | 21.41 | [link](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.reddit.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | books | 87.75 | 0.0 | 11.52 | 55.21 | 10.77 | [link](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | electronics | 87.6 | 0.0 | 14.87 | 56.07 | 14.29 | [link](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | grocery | 87.38 | 0.6 | 15.53 | 56.63 | 12.49 | [link](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.grocery.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | movies | 87.73 | 1.08 | 12.86 | 55.55 | 13.9 | [link](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | restaurants | 87.71 | 0.0 | 11.47 | 54.91 | 12.16 | [link](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | tripadvisor | 88.78 | 1.02 | 13.92 | 55.91 | 13.41 | [link](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.tripadvisor.json) |
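Each row in the table above links to a raw metric JSON file on the Hub. The snippet below is a minimal sketch for fetching one of them with the standard library; it assumes the file layout is unchanged and makes no assumption about the JSON schema beyond it being a top-level object.
```python
import json
import urllib.request

# URL taken from the out-of-domain table above (amazon split)
url = ("https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/"
       "eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json")
with urllib.request.urlopen(url) as response:
    metrics = json.load(response)

print(sorted(metrics))  # inspect which top-level keys the metric file exposes
```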
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-base
- max_length: 512
- max_length_output: 32
- epoch: 7
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-base-squad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "lmqg/bart-base-squad-qg", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 24.68, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 52.66, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 26.05, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.87, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 64.47, "name": "MoverScore (Question Generation)"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.49, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.44, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.55, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 70.38, "name": "QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 70.1, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 70.67, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer", "value": 92.84, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_gold_answer", "value": 92.95, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_gold_answer", "value": 92.75, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer", "value": 64.24, "name": "QAAlignedF1Score-MoverScore (Question & Answer 
Generation) [Gold Answer]"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_gold_answer", "value": 64.11, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_gold_answer", "value": 64.46, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer]"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "amazon", "args": "amazon"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.05824165264328302, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.23816054441894524, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.2126541577267873, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9049284884636415, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6026811246610306, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "new_wiki", "args": "new_wiki"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.10732253983426589, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.2843539251435107, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.26233713078026283, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9307303692241476, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.656720781293701, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "nyt", "args": "nyt"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.07645313983751752, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.2390325229516282, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.244330483594333, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9235989114144583, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6368628469746445, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "reddit", "args": "reddit"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.053789810023704955, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.2141155595451475, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.20395821936787215, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.905714302466044, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6013927660089013, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "books", "args": "books"}, "metrics": [{"type": "bleu4_question_generation", "value": 1.4952813458186383e-10, "name": 
"BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.10769136267285535, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.11520101781020654, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8774975922095214, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5520873074919223, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "electronics", "args": "electronics"}, "metrics": [{"type": "bleu4_question_generation", "value": 1.3766381900873328e-06, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.14287460464803423, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.14866637711177003, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8759880110997111, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5607199201429516, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "grocery", "args": "grocery"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.006003840641121225, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.1248840598199836, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.1553374628831024, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8737966828346252, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5662545638649026, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "movies", "args": "movies"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.0108258720771249, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.1389815289507374, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.12855849168399078, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8773110466344016, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5555164603510797, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "restaurants", "args": "restaurants"}, "metrics": [{"type": "bleu4_question_generation", "value": 1.7873892359263582e-10, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.12160976589996819, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.1146979295288459, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8771339668070569, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5490739019998478, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "tripadvisor", 
"args": "tripadvisor"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.010174680918435602, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.1341425139885307, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.1391725168440533, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8877592491739579, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5590591813016728, "name": "MoverScore (Question Generation)"}]}]}]} | lmqg/bart-base-squad-qg | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/bart-large-squad-qg-default`
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without hyperparameter search; the default configuration is taken from [ERNIE-GEN](https://arxiv.org/abs/2001.11314).
### Overview
- **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/bart-large-squad-qg-default")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/bart-large-squad-qg-default")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
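The pipeline also accepts a batch of highlighted inputs as a plain list. A minimal sketch follows; the example sentences are illustrative.
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/bart-large-squad-qg-default")
inputs = [
    "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.",
    "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.",
]
# The pipeline returns one dict per input when given a list
for out in pipe(inputs):
    print(out["generated_text"])
```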
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-large-squad-qg-default/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.95 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 56.25 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 40.27 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 30.71 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 23.94 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 25.91 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 64.42 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 52.2 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-large
- max_length: 512
- max_length_output: 32
- epoch: 10
- batch: 8
- lr: 1.25e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.1
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-large-squad-qg-default/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/bart-large-squad-qg-default", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 23.94, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 52.2, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 25.91, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.95, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 64.42, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/bart-large-squad-qg-default | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2001.11314",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/bart-large-squad-qg-no-answer`
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without answer information, i.e. it generates a question given only a paragraph (whereas the standard model is fine-tuned to generate a question given a paragraph and an associated answer within it).
### Overview
- **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/bart-large-squad-qg-no-answer")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/bart-large-squad-qg-no-answer")
output = pipe("<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>")
```
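This variant takes a `paragraph_sentence` input, i.e. the full paragraph with the target sentence (not an answer span) wrapped in `<hl>` tokens. The helper below is a minimal sketch for building that input; the function name and the example paragraph are illustrative assumptions.
```python
from transformers import pipeline

def build_paragraph_sentence(paragraph: str, sentence: str) -> str:
    """Wrap the target sentence in <hl> tokens inside the paragraph (illustrative helper)."""
    return paragraph.replace(sentence, f"<hl> {sentence} <hl>", 1)

pipe = pipeline("text2text-generation", "research-backup/bart-large-squad-qg-no-answer")
paragraph = ("Beyonce announced a hiatus from her music career in January 2010. "
             "Beyonce further expanded her acting career, starring as blues singer "
             "Etta James in the 2008 musical biopic, Cadillac Records.")
target = ("Beyonce further expanded her acting career, starring as blues singer "
          "Etta James in the 2008 musical biopic, Cadillac Records.")
print(pipe(build_paragraph_sentence(paragraph, target)))
```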
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-large-squad-qg-no-answer/raw/main/eval/metric.first.sentence.paragraph_sentence.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.28 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 55.38 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 39.27 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 29.99 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 23.47 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 24.94 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 63.28 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 50.25 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_sentence']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-large
- max_length: 512
- max_length_output: 32
- epoch: 4
- batch: 32
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-large-squad-qg-no-answer/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>", "example_title": "Question Generation Example 1"}, {"text": "<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>", "example_title": "Question Generation Example 2"}, {"text": "<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records . <hl>", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/bart-large-squad-qg-no-answer", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 23.47, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 50.25, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 24.94, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.28, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 63.28, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/bart-large-squad-qg-no-answer | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/bart-large-squad-qg-no-paragraph`
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without paragraph information, using only the sentence that contains the answer.
### Overview
- **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/bart-large-squad-qg-no-paragraph")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/bart-large-squad-qg-no-paragraph")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-large-squad-qg-no-paragraph/raw/main/eval/metric.first.sentence.sentence_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.7 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 55.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 39.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 30.44 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 23.86 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 25.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 63.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 51.43 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
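For reference, scores such as Bleu_4 above can be reproduced approximately with the Hugging Face `evaluate` library. The snippet below is a minimal sketch on toy data, not the exact evaluation script behind the table; the official numbers come from the raw metric file linked above.
```python
import evaluate

# BLEU with max_order=4 corresponds to the Bleu_4 column above (sketch on toy data)
bleu = evaluate.load("bleu")
predictions = ["what did beyonce star in in 2008 ?"]
references = [["what musical biopic did beyonce star in in 2008 ?"]]
result = bleu.compute(predictions=predictions, references=references, max_order=4)
print(result["bleu"])
```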
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['sentence_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-large
- max_length: 128
- max_length_output: 32
- epoch: 8
- batch: 32
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-large-squad-qg-no-paragraph/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/bart-large-squad-qg-no-paragraph", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 23.86, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 51.43, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 25.18, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.7, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 63.85, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/bart-large-squad-qg-no-paragraph | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `lmqg/bart-large-squad-qg`
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/bart-large-squad-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/bart-large-squad-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
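Generation arguments such as the beam size can be forwarded through the pipeline call. The values below are illustrative assumptions, not the official decoding configuration.
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/bart-large-squad-qg")
text = "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
# num_beams / max_length are passed through to model.generate (illustrative values)
output = pipe(text, num_beams=4, max_length=32)
print(output[0]["generated_text"])
```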
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 91 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 58.79 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 42.79 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 33.11 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 26.17 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 27.07 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 64.99 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 53.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 95.54 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 70.82 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 95.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 71.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 95.49 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 70.54 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/bart-large-squad-ae`](https://huggingface.co/lmqg/bart-large-squad-ae). [raw metric file](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.lmqg_bart-large-squad-ae.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 93.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 64.76 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 93.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 64.98 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 93.35 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 64.63 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metrics (Question Generation, Out-of-Domain)***
| Dataset | Type | BERTScore| Bleu_4 | METEOR | MoverScore | ROUGE_L | Link |
|:--------|:-----|---------:|-------:|-------:|-----------:|--------:|-----:|
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | amazon | 90.93 | 6.53 | 22.3 | 60.87 | 25.03 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | new_wiki | 93.23 | 11.12 | 27.32 | 66.23 | 29.68 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | nyt | 92.49 | 8.12 | 25.25 | 64.06 | 25.29 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | reddit | 90.95 | 5.95 | 21.5 | 60.59 | 22.37 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.reddit.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | books | 88.07 | 0.63 | 11.58 | 55.56 | 12.37 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | electronics | 87.83 | 0.87 | 15.35 | 56.35 | 16.02 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | grocery | 87.79 | 0.53 | 15.13 | 57.02 | 12.34 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.grocery.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | movies | 87.49 | 0.0 | 11.86 | 55.29 | 12.51 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | restaurants | 87.98 | 0.0 | 12.42 | 55.43 | 13.08 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | tripadvisor | 88.91 | 0.0 | 13.72 | 56.05 | 14.03 | [link](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.tripadvisor.json) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-large
- max_length: 512
- max_length_output: 32
- epoch: 4
- batch: 32
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-large-squad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "lmqg/bart-large-squad-qg", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 26.17, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 53.85, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 27.07, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 91.0, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 64.99, "name": "MoverScore (Question Generation)"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.54, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.49, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.59, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 70.82, "name": "QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 70.54, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 71.13, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer", "value": 93.23, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_gold_answer", "value": 93.35, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_gold_answer", "value": 93.13, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer", "value": 64.76, "name": "QAAlignedF1Score-MoverScore (Question & Answer 
Generation) [Gold Answer]"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_gold_answer", "value": 64.63, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_gold_answer", "value": 64.98, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer]"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "amazon", "args": "amazon"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.06530369842068952, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.25030985091008146, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.2229994442645732, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9092814804525936, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6086538514008419, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "new_wiki", "args": "new_wiki"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.11118273173452982, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.2967546690273089, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.27315087810722966, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9322739617807421, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6623000084761579, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "nyt", "args": "nyt"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.08117757543966063, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.25292097720734297, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.25254205113198686, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9249009759439454, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6406329128556304, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "reddit", "args": "reddit"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.059525104157825456, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.22365090580055863, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.21499800504546457, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9095144685254328, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6059332247878408, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "books", "args": "books"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.006278914808207679, "name": 
"BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.12368226019088967, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.11576293675813865, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8807110440044503, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5555905941686486, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "electronics", "args": "electronics"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.00866799444965211, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.1601628874804186, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.15348605312210778, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8783386920680519, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5634845371093992, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "grocery", "args": "grocery"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.00528043272450429, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.12343711316491492, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.15133496445452477, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8778951253890991, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5701949938103265, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "movies", "args": "movies"}, "metrics": [{"type": "bleu4_question_generation", "value": 1.0121579426501661e-06, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.12508697028506718, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.11862284941640638, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8748829724726739, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5528899173535703, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "restaurants", "args": "restaurants"}, "metrics": [{"type": "bleu4_question_generation", "value": 1.1301750984972448e-06, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.13083168975354642, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.12419733006916912, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8797711839570719, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5542757411268555, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "tripadvisor", 
"args": "tripadvisor"}, "metrics": [{"type": "bleu4_question_generation", "value": 8.380171318718442e-07, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.1402922852924756, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.1372146070365174, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8891002409937424, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5604572211470809, "name": "MoverScore (Question Generation)"}]}]}]} | lmqg/bart-large-squad-qg | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `lmqg/mt5-small-jaquad-qg-ae`
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for question generation and answer extraction, trained jointly on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
- **Language:** ja
- **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ja", model="lmqg/mt5-small-jaquad-qg-ae")
# model prediction
question_answer_pairs = model.generate_qa("フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-small-jaquad-qg-ae")
# question generation
question = pipe("generate question: ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。")
# answer extraction
answer = pipe("extract answers: 『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。<hl>前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる3万5000部が刷られた。<hl>他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-jaquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 81.64 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_1 | 56.94 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_2 | 45.23 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_3 | 37.37 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_4 | 31.55 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| METEOR | 29.64 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| MoverScore | 59.42 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| ROUGE_L | 52.58 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-jaquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 80.51 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedF1Score (MoverScore) | 56.28 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedPrecision (BERTScore) | 80.51 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedPrecision (MoverScore) | 56.28 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedRecall (BERTScore) | 80.51 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedRecall (MoverScore) | 56.28 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-jaquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 29.55 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| AnswerF1Score | 29.55 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| BERTScore | 78.12 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_1 | 34.96 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_2 | 31.92 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_3 | 29.49 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_4 | 27.55 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| METEOR | 26.22 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| MoverScore | 65.68 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| ROUGE_L | 36.63 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_jaquad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: google/mt5-small
- max_length: 512
- max_length_output: 32
- epoch: 24
- batch: 64
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-jaquad-qg-ae/raw/main/trainer_config.json).
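The run can in principle be reproduced with the fine-tuning interface of [`lmqg`](https://github.com/asahi417/lm-question-generation). The sketch below is a minimal, unofficial example assuming the `GridSearcher` class from that repository, with each search space collapsed to the single value listed above; `checkpoint_dir` is a hypothetical output path.
```python
from lmqg import GridSearcher

# minimal sketch: search spaces collapsed to the single values from this card
trainer = GridSearcher(
    checkpoint_dir="ckpt/mt5-small-jaquad-qg-ae",  # hypothetical output directory
    dataset_path="lmqg/qg_jaquad",
    model="google/mt5-small",
    epoch=24,
    batch=64,
    lr=[5e-4],
    gradient_accumulation_steps=[1],
    label_smoothing=[0.15],
)
trainer.run()
```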
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "ja", "license": "cc-by-4.0", "tags": ["question generation", "answer extraction"], "datasets": ["lmqg/qg_jaquad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: \u30be\u30d5\u30a3\u30fc\u306f\u8cb4\u65cf\u51fa\u8eab\u3067\u306f\u3042\u3063\u305f\u304c\u738b\u65cf\u51fa\u8eab\u3067\u306f\u306a\u304f\u3001\u30cf\u30d7\u30b9\u30d6\u30eb\u30af\u5bb6\u306e\u7687\u4f4d\u7d99\u627f\u8005\u3067\u3042\u308b\u30d5\u30e9\u30f3\u30c4\u30fb\u30d5\u30a7\u30eb\u30c7\u30a3\u30ca\u30f3\u30c8\u3068\u306e\u7d50\u5a5a\u306f\u8cb4\u8ce4\u7d50\u5a5a\u3068\u306a\u3063\u305f\u3002\u7687\u5e1d\u30d5\u30e9\u30f3\u30c4\u30fb\u30e8\u30fc\u30bc\u30d5\u306f\u30012\u4eba\u306e\u9593\u306b\u751f\u307e\u308c\u305f\u5b50\u5b6b\u304c\u7687\u4f4d\u3092\u7d99\u304c\u306a\u3044\u3053\u3068\u3092\u6761\u4ef6\u3068\u3057\u3066\u7d50\u5a5a\u3092\u627f\u8a8d\u3057\u3066\u3044\u305f\u3002\u8996\u5bdf\u304c\u4e88\u5b9a\u3055\u308c\u3066\u3044\u308b<hl>6\u670828\u65e5<hl>\u306f2\u4eba\u306e14\u56de\u76ee\u306e\u7d50\u5a5a\u8a18\u5ff5\u65e5\u3067\u3042\u3063\u305f\u3002", "example_title": "Question Generation Example 1"}, {"text": "generate question: \u300e\u30af\u30de\u306e\u30d7\u30fc\u3055\u3093\u300f\u306e\u7269\u8a9e\u306f\u307e\u305a1925\u5e7412\u670824\u65e5\u3001\u300e\u30a4\u30f4\u30cb\u30f3\u30b0\u30fb\u30cb\u30e5\u30fc\u30b9\u300f\u7d19\u306e\u30af\u30ea\u30b9\u30de\u30b9\u7279\u96c6\u53f7\u306b\u77ed\u7de8\u4f5c\u54c1\u3068\u3057\u3066\u63b2\u8f09\u3055\u308c\u305f\u3002\u3053\u308c\u306f\u300e\u30af\u30de\u306e\u30d7\u30fc\u3055\u3093\u300f\u306e\u7b2c\u4e00\u7ae0\u306b\u3042\u305f\u308b\u4f5c\u54c1\u3067\u3001\u3053\u306e\u3068\u304d\u3060\u3051\u306f\u633f\u7d75\u3092J.H.\u30c0\u30a6\u30c9\u304c\u3064\u3051\u3066\u3044\u308b\u3002\u305d\u306e\u5f8c\u4f5c\u54c110\u8a71\u3068\u633f\u7d75\u304c\u6574\u3044\u3001\u520a\u884c\u306b\u5148\u99c6\u3051\u3066\u300c\u30a4\u30fc\u30e8\u30fc\u306e\u8a95\u751f\u65e5\u300d\u306e\u30a8\u30d4\u30bd\u30fc\u30c9\u304c1926\u5e748\u6708\u306b\u300e\u30ed\u30a4\u30e4\u30eb\u30de\u30ac\u30b8\u30f3\u300f\u306b\u3001\u540c\u5e7410\u67089\u65e5\u306b\u300e\u30cb\u30e5\u30fc\u30e8\u30fc\u30af\u30fb\u30a4\u30f4\u30cb\u30f3\u30b0\u30fb\u30dd\u30b9\u30c8\u300f\u7d19\u306b\u63b2\u8f09\u3055\u308c\u305f\u3042\u3068\u3001\u540c\u5e7410\u670814\u65e5\u306b\u30ed\u30f3\u30c9\u30f3\u3067(\u30e1\u30b7\u30e5\u30a8\u30f3\u793e)\u300121\u65e5\u306b\u30cb\u30e5\u30fc\u30e8\u30fc\u30af\u3067(\u30c0\u30c3\u30c8\u30f3\u793e)\u300e\u30af\u30de\u306e\u30d7\u30fc\u3055\u3093\u300f\u304c\u520a\u884c\u3055\u308c\u305f\u3002\u524d\u8457\u300e\u307c\u304f\u305f\u3061\u304c\u3068\u3066\u3082\u3061\u3044\u3055\u304b\u3063\u305f\u3053\u308d\u300f\u304c\u3059\u3067\u306b\u5927\u304d\u306a\u6210\u529f\u3092\u53ce\u3081\u3066\u3044\u305f\u3053\u3068\u3082\u3042\u308a\u3001\u30a4\u30ae\u30ea\u30b9\u3067\u306f\u521d\u7248\u306f\u524d\u8457\u306e7\u500d\u306b\u5f53\u305f\u308b<hl>3\u4e075000\u90e8<hl>\u304c\u5237\u3089\u308c\u305f\u3002\u4ed6\u65b9\u306e\u30a2\u30e1\u30ea\u30ab\u3067\u3082\u305d\u306e\u5e74\u306e\u7d42\u308f\u308a\u307e\u3067\u306b15\u4e07\u90e8\u3092\u58f2\u308a\u4e0a\u3052\u3066\u3044\u308b\u3002\u305f\u3060\u3057\u4f9d\u7136\u3068\u3057\u3066\u4eba\u6c17\u306e\u3042\u3063\u305f\u524d\u8457\u3092\u58f2\u308a\u4e0a\u3052\u3067\u8ffd\u3044\u8d8a\u3059\u306b\u306f\u6570\u5e74\u306e\u6642\u9593\u3092\u8981\u3057\u305f\u3002", "example_title": "Question Generation Example 2"}, {"text": "generate 
question: \u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u306e\u4f5c\u54c1\u3067\u306f\u300117\u4e16\u7d00\u306e\u30aa\u30e9\u30f3\u30c0\u306e\u753b\u5bb6\u3001\u30e8\u30cf\u30cd\u30b9\u30fb\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u306e\u4f5c\u54c1\u306b\u3064\u3044\u3066\u8a18\u8ff0\u3059\u308b\u3002\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u306e\u4f5c\u54c1\u306f\u3001\u7591\u554f\u4f5c\u3082\u542b\u3081<hl>30\u6570\u70b9<hl>\u3057\u304b\u73fe\u5b58\u3057\u306a\u3044\u3002\u73fe\u5b58\u4f5c\u54c1\u306f\u3059\u3079\u3066\u6cb9\u5f69\u753b\u3067\u3001\u7248\u753b\u3001\u4e0b\u7d75\u3001\u7d20\u63cf\u306a\u3069\u306f\u6b8b\u3063\u3066\u3044\u306a\u3044\u3002\u4ee5\u4e0b\u306b\u306f\u82e5\u5e72\u306e\u7591\u554f\u4f5c\u3082\u542b\u3081\u300137\u70b9\u306e\u57fa\u672c\u60c5\u5831\u3092\u8a18\u8f09\u3057\u3001\u5404\u4f5c\u54c1\u306b\u3064\u3044\u3066\u7565\u8aac\u3059\u308b\u3002\u53ce\u9332\u9806\u5e8f\u3001\u63a8\u5b9a\u5236\u4f5c\u5e74\u4ee3\u306f\u300e\u300c\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u3068\u305d\u306e\u6642\u4ee3\u5c55\u300d\u56f3\u9332\u300f\u306b\u3088\u308b\u3002\u65e5\u672c\u8a9e\u306e\u4f5c\u54c1\u30bf\u30a4\u30c8\u30eb\u306b\u3064\u3044\u3066\u306f\u3001\u4e0a\u63b2\u56f3\u9332\u306e\u307b\u304b\u3001\u300e\u300c\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u5c55\u300d\u56f3\u9332\u300f\u3001\u300e\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u751f\u6daf\u3068\u4f5c\u54c1\u300f\u306b\u3088\u308b\u3002\u4fbf\u5b9c\u4e0a\u300c1650\u5e74\u4ee3\u306e\u4f5c\u54c1\u300d\u300c1660\u5e74\u4ee3\u306e\u4f5c\u54c1\u300d\u300c1670\u5e74\u4ee3\u306e\u4f5c\u54c1\u300d\u306e3\u3064\u306e\u7bc0\u3092\u8a2d\u3051\u305f\u304c\u3001\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u306e\u4f5c\u54c1\u306b\u306f\u5236\u4f5c\u5e74\u4ee3\u4e0d\u660e\u306e\u3082\u306e\u304c\u591a\u304f\u3001\u63a8\u5b9a\u5236\u4f5c\u5e74\u4ee3\u306b\u3064\u3044\u3066\u306f\u7814\u7a76\u8005\u3084\u6587\u732e\u306b\u3088\u3063\u3066\u82e5\u5e72\u306e\u5dee\u304c\u3042\u308b\u3002", "example_title": "Question Generation Example 3"}, {"text": "extract answers: 
\u300e\u30af\u30de\u306e\u30d7\u30fc\u3055\u3093\u300f\u306e\u7269\u8a9e\u306f\u307e\u305a1925\u5e7412\u670824\u65e5\u3001\u300e\u30a4\u30f4\u30cb\u30f3\u30b0\u30fb\u30cb\u30e5\u30fc\u30b9\u300f\u7d19\u306e\u30af\u30ea\u30b9\u30de\u30b9\u7279\u96c6\u53f7\u306b\u77ed\u7de8\u4f5c\u54c1\u3068\u3057\u3066\u63b2\u8f09\u3055\u308c\u305f\u3002\u3053\u308c\u306f\u300e\u30af\u30de\u306e\u30d7\u30fc\u3055\u3093\u300f\u306e\u7b2c\u4e00\u7ae0\u306b\u3042\u305f\u308b\u4f5c\u54c1\u3067\u3001\u3053\u306e\u3068\u304d\u3060\u3051\u306f\u633f\u7d75\u3092J.H.\u30c0\u30a6\u30c9\u304c\u3064\u3051\u3066\u3044\u308b\u3002\u305d\u306e\u5f8c\u4f5c\u54c110\u8a71\u3068\u633f\u7d75\u304c\u6574\u3044\u3001\u520a\u884c\u306b\u5148\u99c6\u3051\u3066\u300c\u30a4\u30fc\u30e8\u30fc\u306e\u8a95\u751f\u65e5\u300d\u306e\u30a8\u30d4\u30bd\u30fc\u30c9\u304c1926\u5e748\u6708\u306b\u300e\u30ed\u30a4\u30e4\u30eb\u30de\u30ac\u30b8\u30f3\u300f\u306b\u3001\u540c\u5e7410\u67089\u65e5\u306b\u300e\u30cb\u30e5\u30fc\u30e8\u30fc\u30af\u30fb\u30a4\u30f4\u30cb\u30f3\u30b0\u30fb\u30dd\u30b9\u30c8\u300f\u7d19\u306b\u63b2\u8f09\u3055\u308c\u305f\u3042\u3068\u3001\u540c\u5e7410\u670814\u65e5\u306b\u30ed\u30f3\u30c9\u30f3\u3067(\u30e1\u30b7\u30e5\u30a8\u30f3\u793e)\u300121\u65e5\u306b\u30cb\u30e5\u30fc\u30e8\u30fc\u30af\u3067(\u30c0\u30c3\u30c8\u30f3\u793e)\u300e\u30af\u30de\u306e\u30d7\u30fc\u3055\u3093\u300f\u304c\u520a\u884c\u3055\u308c\u305f\u3002<hl>\u524d\u8457\u300e\u307c\u304f\u305f\u3061\u304c\u3068\u3066\u3082\u3061\u3044\u3055\u304b\u3063\u305f\u3053\u308d\u300f\u304c\u3059\u3067\u306b\u5927\u304d\u306a\u6210\u529f\u3092\u53ce\u3081\u3066\u3044\u305f\u3053\u3068\u3082\u3042\u308a\u3001\u30a4\u30ae\u30ea\u30b9\u3067\u306f\u521d\u7248\u306f\u524d\u8457\u306e7\u500d\u306b\u5f53\u305f\u308b3\u4e075000\u90e8\u304c\u5237\u3089\u308c\u305f\u3002<hl>\u4ed6\u65b9\u306e\u30a2\u30e1\u30ea\u30ab\u3067\u3082\u305d\u306e\u5e74\u306e\u7d42\u308f\u308a\u307e\u3067\u306b15\u4e07\u90e8\u3092\u58f2\u308a\u4e0a\u3052\u3066\u3044\u308b\u3002\u305f\u3060\u3057\u4f9d\u7136\u3068\u3057\u3066\u4eba\u6c17\u306e\u3042\u3063\u305f\u524d\u8457\u3092\u58f2\u308a\u4e0a\u3052\u3067\u8ffd\u3044\u8d8a\u3059\u306b\u306f\u6570\u5e74\u306e\u6642\u9593\u3092\u8981\u3057\u305f\u3002", "example_title": "Answer Extraction Example 1"}, {"text": "extract answers: 
\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u306e\u4f5c\u54c1\u3067\u306f\u300117\u4e16\u7d00\u306e\u30aa\u30e9\u30f3\u30c0\u306e\u753b\u5bb6\u3001\u30e8\u30cf\u30cd\u30b9\u30fb\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u306e\u4f5c\u54c1\u306b\u3064\u3044\u3066\u8a18\u8ff0\u3059\u308b\u3002\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u306e\u4f5c\u54c1\u306f\u3001\u7591\u554f\u4f5c\u3082\u542b\u308130\u6570\u70b9\u3057\u304b\u73fe\u5b58\u3057\u306a\u3044\u3002<hl>\u73fe\u5b58\u4f5c\u54c1\u306f\u3059\u3079\u3066\u6cb9\u5f69\u753b\u3067\u3001\u7248\u753b\u3001\u4e0b\u7d75\u3001\u7d20\u63cf\u306a\u3069\u306f\u6b8b\u3063\u3066\u3044\u306a\u3044\u3002\u4ee5\u4e0b\u306b\u306f\u82e5\u5e72\u306e\u7591\u554f\u4f5c\u3082\u542b\u3081\u300137\u70b9\u306e\u57fa\u672c\u60c5\u5831\u3092\u8a18\u8f09\u3057\u3001\u5404\u4f5c\u54c1\u306b\u3064\u3044\u3066\u7565\u8aac\u3059\u308b\u3002<hl>\u53ce\u9332\u9806\u5e8f\u3001\u63a8\u5b9a\u5236\u4f5c\u5e74\u4ee3\u306f\u300e\u300c\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u3068\u305d\u306e\u6642\u4ee3\u5c55\u300d\u56f3\u9332\u300f\u306b\u3088\u308b\u3002\u65e5\u672c\u8a9e\u306e\u4f5c\u54c1\u30bf\u30a4\u30c8\u30eb\u306b\u3064\u3044\u3066\u306f\u3001\u4e0a\u63b2\u56f3\u9332\u306e\u307b\u304b\u3001\u300e\u300c\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u5c55\u300d\u56f3\u9332\u300f\u3001\u300e\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u751f\u6daf\u3068\u4f5c\u54c1\u300f\u306b\u3088\u308b\u3002\u4fbf\u5b9c\u4e0a\u300c1650\u5e74\u4ee3\u306e\u4f5c\u54c1\u300d\u300c1660\u5e74\u4ee3\u306e\u4f5c\u54c1\u300d\u300c1670\u5e74\u4ee3\u306e\u4f5c\u54c1\u300d\u306e3\u3064\u306e\u7bc0\u3092\u8a2d\u3051\u305f\u304c\u3001\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u306e\u4f5c\u54c1\u306b\u306f\u5236\u4f5c\u5e74\u4ee3\u4e0d\u660e\u306e\u3082\u306e\u304c\u591a\u304f\u3001\u63a8\u5b9a\u5236\u4f5c\u5e74\u4ee3\u306b\u3064\u3044\u3066\u306f\u7814\u7a76\u8005\u3084\u6587\u732e\u306b\u3088\u3063\u3066\u82e5\u5e72\u306e\u5dee\u304c\u3042\u308b\u3002", "example_title": "Answer Extraction Example 2"}], "model-index": [{"name": "lmqg/mt5-small-jaquad-qg-ae", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_jaquad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 31.55, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 52.58, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 29.64, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 81.64, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 59.42, "name": "MoverScore (Question Generation)"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer", "value": 80.51, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer", "value": 80.51, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer", "value": 80.51, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer", "value": 56.28, "name": "QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer))"}, {"type": 
"qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer", "value": 56.28, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer", "value": 56.28, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer))"}, {"type": "bleu4_answer_extraction", "value": 27.55, "name": "BLEU4 (Answer Extraction)"}, {"type": "rouge_l_answer_extraction", "value": 36.63, "name": "ROUGE-L (Answer Extraction)"}, {"type": "meteor_answer_extraction", "value": 26.22, "name": "METEOR (Answer Extraction)"}, {"type": "bertscore_answer_extraction", "value": 78.12, "name": "BERTScore (Answer Extraction)"}, {"type": "moverscore_answer_extraction", "value": 65.68, "name": "MoverScore (Answer Extraction)"}, {"type": "answer_f1_score__answer_extraction", "value": 29.55, "name": "AnswerF1Score (Answer Extraction)"}, {"type": "answer_exact_match_answer_extraction", "value": 29.55, "name": "AnswerExactMatch (Answer Extraction)"}]}]}]} | lmqg/mt5-small-jaquad-qg-ae | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question generation",
"answer extraction",
"ja",
"dataset:lmqg/qg_jaquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `lmqg/mt5-small-jaquad-qg`
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for the question generation task on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
- **Language:** ja
- **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ja", model="lmqg/mt5-small-jaquad-qg")
# model prediction
questions = model.generate_q(list_context="フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。", list_answer="30数点")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-small-jaquad-qg")
output = pipe("ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。")
```
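The `<hl>` tokens in the input mark the answer span the question should target. The helper below is a hypothetical convenience function (not part of `transformers` or `lmqg`) showing how such an input can be built from a raw paragraph and answer:
```python
def highlight_answer(paragraph: str, answer: str) -> str:
    """Wrap the first occurrence of `answer` in <hl> markers, as the model expects."""
    if answer not in paragraph:
        raise ValueError("answer span not found in paragraph")
    return paragraph.replace(answer, f"<hl>{answer}<hl>", 1)

context = "視察が予定されている6月28日は2人の14回目の結婚記念日であった。"
print(highlight_answer(context, "6月28日"))
# 視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。
```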
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-jaquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 80.87 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_1 | 56.34 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_2 | 44.28 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_3 | 36.31 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_4 | 30.49 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| METEOR | 29.03 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| MoverScore | 58.67 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| ROUGE_L | 50.88 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/mt5-small-jaquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 86.07 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedF1Score (MoverScore) | 61.83 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedPrecision (BERTScore) | 86.08 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedPrecision (MoverScore) | 61.85 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedRecall (BERTScore) | 86.06 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedRecall (MoverScore) | 61.81 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
- ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated from an answer extracted by [`lmqg/mt5-small-jaquad-ae`](https://huggingface.co/lmqg/mt5-small-jaquad-ae); a usage sketch of this two-model pipeline follows the table. [raw metric file](https://huggingface.co/lmqg/mt5-small-jaquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_jaquad.default.lmqg_mt5-small-jaquad-ae.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 79.78 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedF1Score (MoverScore) | 55.85 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedPrecision (BERTScore) | 76.84 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedPrecision (MoverScore) | 53.8 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedRecall (BERTScore) | 83.06 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| QAAlignedRecall (MoverScore) | 58.22 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
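The pipeline numbers above are obtained by chaining this question generation model with a separately fine-tuned answer extraction model. A minimal sketch with `lmqg`, assuming `TransformersQG` accepts a companion `model_ae` argument for the answer extraction checkpoint (as in the project README):
```python
from lmqg import TransformersQG

# QG model paired with a dedicated answer extraction model
model = TransformersQG(
    language="ja",
    model="lmqg/mt5-small-jaquad-qg",
    model_ae="lmqg/mt5-small-jaquad-ae",
)
# answers are extracted from the paragraph first, then one question is generated per answer
question_answer_pairs = model.generate_qa("フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。")
```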
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_jaquad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: google/mt5-small
- max_length: 512
- max_length_output: 32
- epoch: 21
- batch: 64
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.0
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-jaquad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "ja", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_jaquad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "\u30be\u30d5\u30a3\u30fc\u306f\u8cb4\u65cf\u51fa\u8eab\u3067\u306f\u3042\u3063\u305f\u304c\u738b\u65cf\u51fa\u8eab\u3067\u306f\u306a\u304f\u3001\u30cf\u30d7\u30b9\u30d6\u30eb\u30af\u5bb6\u306e\u7687\u4f4d\u7d99\u627f\u8005\u3067\u3042\u308b\u30d5\u30e9\u30f3\u30c4\u30fb\u30d5\u30a7\u30eb\u30c7\u30a3\u30ca\u30f3\u30c8\u3068\u306e\u7d50\u5a5a\u306f\u8cb4\u8ce4\u7d50\u5a5a\u3068\u306a\u3063\u305f\u3002\u7687\u5e1d\u30d5\u30e9\u30f3\u30c4\u30fb\u30e8\u30fc\u30bc\u30d5\u306f\u30012\u4eba\u306e\u9593\u306b\u751f\u307e\u308c\u305f\u5b50\u5b6b\u304c\u7687\u4f4d\u3092\u7d99\u304c\u306a\u3044\u3053\u3068\u3092\u6761\u4ef6\u3068\u3057\u3066\u7d50\u5a5a\u3092\u627f\u8a8d\u3057\u3066\u3044\u305f\u3002\u8996\u5bdf\u304c\u4e88\u5b9a\u3055\u308c\u3066\u3044\u308b<hl>6\u670828\u65e5<hl>\u306f2\u4eba\u306e14\u56de\u76ee\u306e\u7d50\u5a5a\u8a18\u5ff5\u65e5\u3067\u3042\u3063\u305f\u3002", "example_title": "Question Generation Example 1"}, {"text": "\u300e\u30af\u30de\u306e\u30d7\u30fc\u3055\u3093\u300f\u306e\u7269\u8a9e\u306f\u307e\u305a1925\u5e7412\u670824\u65e5\u3001\u300e\u30a4\u30f4\u30cb\u30f3\u30b0\u30fb\u30cb\u30e5\u30fc\u30b9\u300f\u7d19\u306e\u30af\u30ea\u30b9\u30de\u30b9\u7279\u96c6\u53f7\u306b\u77ed\u7de8\u4f5c\u54c1\u3068\u3057\u3066\u63b2\u8f09\u3055\u308c\u305f\u3002\u3053\u308c\u306f\u300e\u30af\u30de\u306e\u30d7\u30fc\u3055\u3093\u300f\u306e\u7b2c\u4e00\u7ae0\u306b\u3042\u305f\u308b\u4f5c\u54c1\u3067\u3001\u3053\u306e\u3068\u304d\u3060\u3051\u306f\u633f\u7d75\u3092J.H.\u30c0\u30a6\u30c9\u304c\u3064\u3051\u3066\u3044\u308b\u3002\u305d\u306e\u5f8c\u4f5c\u54c110\u8a71\u3068\u633f\u7d75\u304c\u6574\u3044\u3001\u520a\u884c\u306b\u5148\u99c6\u3051\u3066\u300c\u30a4\u30fc\u30e8\u30fc\u306e\u8a95\u751f\u65e5\u300d\u306e\u30a8\u30d4\u30bd\u30fc\u30c9\u304c1926\u5e748\u6708\u306b\u300e\u30ed\u30a4\u30e4\u30eb\u30de\u30ac\u30b8\u30f3\u300f\u306b\u3001\u540c\u5e7410\u67089\u65e5\u306b\u300e\u30cb\u30e5\u30fc\u30e8\u30fc\u30af\u30fb\u30a4\u30f4\u30cb\u30f3\u30b0\u30fb\u30dd\u30b9\u30c8\u300f\u7d19\u306b\u63b2\u8f09\u3055\u308c\u305f\u3042\u3068\u3001\u540c\u5e7410\u670814\u65e5\u306b\u30ed\u30f3\u30c9\u30f3\u3067(\u30e1\u30b7\u30e5\u30a8\u30f3\u793e)\u300121\u65e5\u306b\u30cb\u30e5\u30fc\u30e8\u30fc\u30af\u3067(\u30c0\u30c3\u30c8\u30f3\u793e)\u300e\u30af\u30de\u306e\u30d7\u30fc\u3055\u3093\u300f\u304c\u520a\u884c\u3055\u308c\u305f\u3002\u524d\u8457\u300e\u307c\u304f\u305f\u3061\u304c\u3068\u3066\u3082\u3061\u3044\u3055\u304b\u3063\u305f\u3053\u308d\u300f\u304c\u3059\u3067\u306b\u5927\u304d\u306a\u6210\u529f\u3092\u53ce\u3081\u3066\u3044\u305f\u3053\u3068\u3082\u3042\u308a\u3001\u30a4\u30ae\u30ea\u30b9\u3067\u306f\u521d\u7248\u306f\u524d\u8457\u306e7\u500d\u306b\u5f53\u305f\u308b<hl>3\u4e075000\u90e8<hl>\u304c\u5237\u3089\u308c\u305f\u3002\u4ed6\u65b9\u306e\u30a2\u30e1\u30ea\u30ab\u3067\u3082\u305d\u306e\u5e74\u306e\u7d42\u308f\u308a\u307e\u3067\u306b15\u4e07\u90e8\u3092\u58f2\u308a\u4e0a\u3052\u3066\u3044\u308b\u3002\u305f\u3060\u3057\u4f9d\u7136\u3068\u3057\u3066\u4eba\u6c17\u306e\u3042\u3063\u305f\u524d\u8457\u3092\u58f2\u308a\u4e0a\u3052\u3067\u8ffd\u3044\u8d8a\u3059\u306b\u306f\u6570\u5e74\u306e\u6642\u9593\u3092\u8981\u3057\u305f\u3002", "example_title": "Question Generation Example 2"}, {"text": 
"\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u306e\u4f5c\u54c1\u3067\u306f\u300117\u4e16\u7d00\u306e\u30aa\u30e9\u30f3\u30c0\u306e\u753b\u5bb6\u3001\u30e8\u30cf\u30cd\u30b9\u30fb\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u306e\u4f5c\u54c1\u306b\u3064\u3044\u3066\u8a18\u8ff0\u3059\u308b\u3002\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u306e\u4f5c\u54c1\u306f\u3001\u7591\u554f\u4f5c\u3082\u542b\u3081<hl>30\u6570\u70b9<hl>\u3057\u304b\u73fe\u5b58\u3057\u306a\u3044\u3002\u73fe\u5b58\u4f5c\u54c1\u306f\u3059\u3079\u3066\u6cb9\u5f69\u753b\u3067\u3001\u7248\u753b\u3001\u4e0b\u7d75\u3001\u7d20\u63cf\u306a\u3069\u306f\u6b8b\u3063\u3066\u3044\u306a\u3044\u3002\u4ee5\u4e0b\u306b\u306f\u82e5\u5e72\u306e\u7591\u554f\u4f5c\u3082\u542b\u3081\u300137\u70b9\u306e\u57fa\u672c\u60c5\u5831\u3092\u8a18\u8f09\u3057\u3001\u5404\u4f5c\u54c1\u306b\u3064\u3044\u3066\u7565\u8aac\u3059\u308b\u3002\u53ce\u9332\u9806\u5e8f\u3001\u63a8\u5b9a\u5236\u4f5c\u5e74\u4ee3\u306f\u300e\u300c\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u3068\u305d\u306e\u6642\u4ee3\u5c55\u300d\u56f3\u9332\u300f\u306b\u3088\u308b\u3002\u65e5\u672c\u8a9e\u306e\u4f5c\u54c1\u30bf\u30a4\u30c8\u30eb\u306b\u3064\u3044\u3066\u306f\u3001\u4e0a\u63b2\u56f3\u9332\u306e\u307b\u304b\u3001\u300e\u300c\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u5c55\u300d\u56f3\u9332\u300f\u3001\u300e\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u751f\u6daf\u3068\u4f5c\u54c1\u300f\u306b\u3088\u308b\u3002\u4fbf\u5b9c\u4e0a\u300c1650\u5e74\u4ee3\u306e\u4f5c\u54c1\u300d\u300c1660\u5e74\u4ee3\u306e\u4f5c\u54c1\u300d\u300c1670\u5e74\u4ee3\u306e\u4f5c\u54c1\u300d\u306e3\u3064\u306e\u7bc0\u3092\u8a2d\u3051\u305f\u304c\u3001\u30d5\u30a7\u30eb\u30e1\u30fc\u30eb\u306e\u4f5c\u54c1\u306b\u306f\u5236\u4f5c\u5e74\u4ee3\u4e0d\u660e\u306e\u3082\u306e\u304c\u591a\u304f\u3001\u63a8\u5b9a\u5236\u4f5c\u5e74\u4ee3\u306b\u3064\u3044\u3066\u306f\u7814\u7a76\u8005\u3084\u6587\u732e\u306b\u3088\u3063\u3066\u82e5\u5e72\u306e\u5dee\u304c\u3042\u308b\u3002", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "lmqg/mt5-small-jaquad-qg", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_jaquad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 30.49, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 50.88, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 29.03, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 80.87, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 58.67, "name": "MoverScore (Question Generation)"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 86.07, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 86.06, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 86.08, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 61.83, "name": "QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold 
Answer)) [Gold Answer]"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 61.81, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 61.85, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer", "value": 79.78, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_gold_answer", "value": 83.06, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_gold_answer", "value": 76.84, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer", "value": 55.85, "name": "QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_gold_answer", "value": 58.22, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_gold_answer", "value": 53.8, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer]"}]}]}]} | lmqg/mt5-small-jaquad-qg | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question generation",
"ja",
"dataset:lmqg/qg_jaquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/t5-base-squad-qg-default`
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model was fine-tuned without hyperparameter search; the default configuration is taken from [ERNIE-GEN](https://arxiv.org/abs/2001.11314).
### Overview
- **Language model:** [t5-base](https://huggingface.co/t5-base)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-base-squad-qg-default")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-base-squad-qg-default")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
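# `pipe` returns a list of dicts; the generated question is under "generated_text"
print(output[0]["generated_text"])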
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-base-squad-qg-default/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.74 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 57.68 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 41.74 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 32.17 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 25.41 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 26.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 64.46 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 52.75 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-base
- max_length: 512
- max_length_output: 32
- epoch: 10
- batch: 32
- lr: 1.25e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.1
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-base-squad-qg-default/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/t5-base-squad-qg-default", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 25.41, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 52.75, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 26.58, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.74, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 64.46, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/t5-base-squad-qg-default | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2001.11314",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `lmqg/t5-base-squad-qg-ae`
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) for question generation and answer extraction, trained jointly on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [t5-base](https://huggingface.co/t5-base)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/t5-base-squad-qg-ae")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/t5-base-squad-qg-ae")
# question generation
question = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
# answer extraction
answer = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.")
```
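Because question generation and answer extraction share one checkpoint, the two prefixes can be chained into a rough end-to-end pipeline. The sketch below is an informal illustration, not the library's official pipeline: it assumes the extraction call returns a single answer that appears verbatim in the paragraph, so the naive substring replace may need hardening in practice.
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/t5-base-squad-qg-ae")

paragraph = ("Beyonce further expanded her acting career, starring as blues singer "
             "Etta James in the 2008 musical biopic, Cadillac Records.")

# Step 1: extract an answer from the highlighted sentence
answer = pipe(f"extract answers: <hl> {paragraph} <hl>")[0]["generated_text"]

# Step 2: highlight the extracted answer and generate a question for it
qg_input = "generate question: " + paragraph.replace(answer, f"<hl> {answer} <hl>", 1)
question = pipe(qg_input)[0]["generated_text"]
print(question, "->", answer)
```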
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 58.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 42.6 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 32.91 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 26.01 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 27 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 64.72 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 53.4 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 92.53 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 64.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 92.35 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 64.33 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 92.74 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 64.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:---------------------------------------------------------------|
| AnswerExactMatch | 58.9 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| AnswerF1Score | 70.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| BERTScore | 91.57 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 56.96 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 52.57 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 48.21 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 44.33 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 43.94 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 82.16 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 69.62 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: t5-base
- max_length: 512
- max_length_output: 32
- epoch: 6
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-base-squad-qg-ae/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation", "answer extraction"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}, {"text": "extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.", "example_title": "Answer Extraction Example 1"}, {"text": "extract answers: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress. 
<hl>", "example_title": "Answer Extraction Example 2"}], "model-index": [{"name": "lmqg/t5-base-squad-qg-ae", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 26.01, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 53.4, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 27.0, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.58, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 64.72, "name": "MoverScore (Question Generation)"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer", "value": 92.53, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer", "value": 92.74, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer", "value": 92.35, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer", "value": 64.23, "name": "QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer", "value": 64.23, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer", "value": 64.33, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer))"}, {"type": "bleu4_answer_extraction", "value": 44.33, "name": "BLEU4 (Answer Extraction)"}, {"type": "rouge_l_answer_extraction", "value": 69.62, "name": "ROUGE-L (Answer Extraction)"}, {"type": "meteor_answer_extraction", "value": 43.94, "name": "METEOR (Answer Extraction)"}, {"type": "bertscore_answer_extraction", "value": 91.57, "name": "BERTScore (Answer Extraction)"}, {"type": "moverscore_answer_extraction", "value": 82.16, "name": "MoverScore (Answer Extraction)"}, {"type": "answer_f1_score__answer_extraction", "value": 70.18, "name": "AnswerF1Score (Answer Extraction)"}, {"type": "answer_exact_match_answer_extraction", "value": 58.9, "name": "AnswerExactMatch (Answer Extraction)"}]}]}]} | lmqg/t5-base-squad-qg-ae | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"answer extraction",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/t5-base-squad-qg-no-answer`
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without answer information, i.e. it generates a question given only a paragraph (whereas the standard model is fine-tuned to generate a question given a paragraph and an associated answer within it).
### Overview
- **Language model:** [t5-base](https://huggingface.co/t5-base)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-base-squad-qg-no-answer")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-base-squad-qg-no-answer")
output = pipe("generate question: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>")
```
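The input format above can also be built programmatically: the sentence the question should target is wrapped with `<hl>` markers inside its paragraph, and the `generate question: ` prefix is added. A minimal sketch (the `build_input` helper and the example text are illustrative, not part of `lmqg`):
```python
from transformers import pipeline

def build_input(paragraph: str, sentence: str) -> str:
    # Wrap the sentence the question should be about with <hl> markers,
    # then add the task prefix used by this model.
    return "generate question: " + paragraph.replace(sentence, f"<hl> {sentence} <hl>", 1)

pipe = pipeline("text2text-generation", "research-backup/t5-base-squad-qg-no-answer")
sentence = ("Beyonce further expanded her acting career, starring as blues singer "
            "Etta James in the 2008 musical biopic, Cadillac Records.")
paragraph = sentence  # single-sentence paragraph, as in the widget examples above
print(pipe(build_input(paragraph, sentence))[0]["generated_text"])
```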
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-base-squad-qg-no-answer/raw/main/eval/metric.first.sentence.paragraph_sentence.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.03 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 54.77 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 38.54 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 29.28 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 22.86 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 24.52 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 62.99 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 49.51 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_sentence']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-base
- max_length: 512
- max_length_output: 32
- epoch: 8
- batch: 16
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-base-squad-qg-no-answer/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>", "example_title": "Question Generation Example 1"}, {"text": "generate question: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>", "example_title": "Question Generation Example 2"}, {"text": "generate question: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records . <hl>", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/t5-base-squad-qg-no-answer", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 22.86, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 49.51, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 24.52, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.03, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 62.99, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/t5-base-squad-qg-no-answer | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/t5-base-squad-qg-no-paragraph`
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
It is fine-tuned without paragraph information, using only the sentence that contains the answer.
### Overview
- **Language model:** [t5-base](https://huggingface.co/t5-base)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-base-squad-qg-no-paragraph")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-base-squad-qg-no-paragraph")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
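For this model the input is built by wrapping the answer span with `<hl>` markers inside its sentence. A minimal sketch (the `highlight_answer` helper is illustrative, not part of `lmqg`):
```python
from transformers import pipeline

def highlight_answer(sentence: str, answer: str) -> str:
    # Mark the first occurrence of the answer span and add the task prefix.
    return "generate question: " + sentence.replace(answer, f"<hl> {answer} <hl>", 1)

pipe = pipeline("text2text-generation", "research-backup/t5-base-squad-qg-no-paragraph")
sentence = ("Beyonce further expanded her acting career, starring as blues singer "
            "Etta James in the 2008 musical biopic, Cadillac Records.")
print(pipe(highlight_answer(sentence, "Etta James"))[0]["generated_text"])
```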
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-base-squad-qg-no-paragraph/raw/main/eval/metric.first.sentence.sentence_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.73 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 56.89 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 40.62 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 31.05 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 24.33 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 25.81 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 64 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 51.81 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['sentence_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-base
- max_length: 128
- max_length_output: 32
- epoch: 8
- batch: 64
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-base-squad-qg-no-paragraph/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/t5-base-squad-qg-no-paragraph", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 24.33, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 51.81, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 25.81, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.73, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 64.0, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/t5-base-squad-qg-no-paragraph | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `lmqg/t5-base-squad-qg`
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [t5-base](https://huggingface.co/t5-base)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/t5-base-squad-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/t5-base-squad-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
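The pipeline returns a list of dicts with a `generated_text` field, and standard generation arguments can be forwarded. A minimal sketch (the decoding settings here are illustrative assumptions, not the configuration used for the scores below):
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/t5-base-squad-qg")
inputs = [
    "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.",
    "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.",
]
# num_beams and max_length are standard generation arguments; values are illustrative.
for out in pipe(inputs, num_beams=4, max_length=32):
    print(out["generated_text"])
```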
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.6 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 58.69 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 42.66 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 32.99 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 26.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 26.97 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 64.74 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 53.33 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 95.42 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 70.63 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 95.48 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 70.92 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 95.37 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 70.34 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated from the answer extracted by [`lmqg/t5-base-squad-ae`](https://huggingface.co/lmqg/t5-base-squad-ae). [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.lmqg_t5-base-squad-ae.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 92.75 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 64.36 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 92.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 64.45 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 92.93 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 64.35 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metrics (Question Generation, Out-of-Domain)***
| Dataset | Type | BERTScore| Bleu_4 | METEOR | MoverScore | ROUGE_L | Link |
|:--------|:-----|---------:|-------:|-------:|-----------:|--------:|-----:|
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | amazon | 90.75 | 6.57 | 22.37 | 60.8 | 24.81 | [link](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | new_wiki | 93.02 | 11.09 | 27.23 | 65.97 | 29.59 | [link](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | nyt | 92.2 | 7.77 | 25.16 | 63.83 | 24.56 | [link](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | reddit | 90.59 | 5.68 | 21.3 | 60.23 | 21.96 | [link](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.reddit.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | books | 88.14 | 0.49 | 13.51 | 55.65 | 9.44 | [link](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | electronics | 87.71 | 0.0 | 16.53 | 55.77 | 13.48 | [link](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | grocery | 87.46 | 0.0 | 16.24 | 56.59 | 10.26 | [link](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.grocery.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | movies | 87.66 | 0.72 | 13.06 | 55.45 | 11.89 | [link](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | restaurants | 87.83 | 0.0 | 13.3 | 55.45 | 10.7 | [link](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | tripadvisor | 89.23 | 0.93 | 16.51 | 56.67 | 13.51 | [link](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.tripadvisor.json) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-base
- max_length: 512
- max_length_output: 32
- epoch: 5
- batch: 16
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-base-squad-qg/raw/main/trainer_config.json).
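The linked configuration can also be fetched programmatically with `huggingface_hub` (a minimal sketch; the repo id and filename are taken from the link above):
```python
import json
from huggingface_hub import hf_hub_download

# Download and inspect the trainer_config.json referenced above.
path = hf_hub_download(repo_id="lmqg/t5-base-squad-qg", filename="trainer_config.json")
with open(path) as f:
    print(json.dumps(json.load(f), indent=2))
```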
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "lmqg/t5-base-squad-qg", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 26.13, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 53.33, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 26.97, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.6, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 64.74, "name": "MoverScore (Question Generation)"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.42, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.37, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.48, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 70.63, "name": "QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 70.34, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 70.92, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer", "value": 92.75, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_gold_answer", "value": 92.93, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_gold_answer", "value": 92.59, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer", "value": 64.36, 
"name": "QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_gold_answer", "value": 64.35, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_gold_answer", "value": 64.45, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer]"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "amazon", "args": "amazon"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.06566094160179252, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.24807913266651793, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.22371955880948402, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9075296597429775, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6080134772590127, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "new_wiki", "args": "new_wiki"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.11090197883325803, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.2958807755982971, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.2723283879163309, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9301888817677253, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6596737223946099, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "nyt", "args": "nyt"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.07770444680489934, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.24562552942523097, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.2516102599911737, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9220106686608106, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.638293725604755, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "reddit", "args": "reddit"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.05681866334465563, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.21961287790760073, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.2129793223231344, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9058513802527968, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6023495282031547, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "books", "args": "books"}, "metrics": [{"type": 
"bleu4_question_generation", "value": 0.004910619965406665, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.09444487769816154, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.13509168014623008, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8813527884907747, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5564529629929519, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "electronics", "args": "electronics"}, "metrics": [{"type": "bleu4_question_generation", "value": 1.1509235130252845e-06, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.1347921519214348, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.1652654590718401, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8771152388648826, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5576801864538657, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "grocery", "args": "grocery"}, "metrics": [{"type": "bleu4_question_generation", "value": 9.978299614007137e-11, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.10263878605233773, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.16240054544628837, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8745810793240865, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5658686637551452, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "movies", "args": "movies"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.007215098899309626, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.118923829807047, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.13060353590956533, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8766350997732831, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5545418638672879, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "restaurants", "args": "restaurants"}, "metrics": [{"type": "bleu4_question_generation", "value": 1.7093216558055103e-10, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.10704045187993966, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.13299758428004418, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8783149416832363, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5544508204843501, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text 
Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "tripadvisor", "args": "tripadvisor"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.009344978745987451, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.13512247796303523, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.16514085804298576, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8923153428327643, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5667192018951045, "name": "MoverScore (Question Generation)"}]}]}]} | lmqg/t5-base-squad-qg | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/t5-large-squad-qg-default`
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
It is fine-tuned without hyperparameter search (the default configuration is taken from [ERNIE-GEN](https://arxiv.org/abs/2001.11314)).
### Overview
- **Language model:** [t5-large](https://huggingface.co/t5-large)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-large-squad-qg-default")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-large-squad-qg-default")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-large-squad-qg-default/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.92 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 59.39 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 43.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 33.91 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 27.03 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 27.71 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 65.21 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 53.98 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-large
- max_length: 512
- max_length_output: 32
- epoch: 10
- batch: 1
- lr: 1.25e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 32
- label_smoothing: 0.1
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-large-squad-qg-default/raw/main/trainer_config.json).
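Note that the small per-step batch is compensated by gradient accumulation; the effective batch size follows directly from the two values above:
```python
# The per-step batch of 1 combined with 32 gradient accumulation steps gives an
# effective batch size of 32 (standard relationship, shown here for clarity).
batch, gradient_accumulation_steps = 1, 32
print(batch * gradient_accumulation_steps)  # 32
```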
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/t5-large-squad-qg-default", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 27.03, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 53.98, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 27.71, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.92, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 65.21, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/t5-large-squad-qg-default | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2001.11314",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/t5-large-squad-qg-no-answer`
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
It is fine-tuned without answer information, i.e. it generates a question given only a paragraph (note that the standard model is fine-tuned to generate a question given a paragraph and an associated answer within it).
### Overview
- **Language model:** [t5-large](https://huggingface.co/t5-large)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-large-squad-qg-no-answer")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-large-squad-qg-no-answer")
output = pipe("generate question: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>")
```
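For larger workloads, the same pipeline can run on GPU and process several highlighted paragraphs at once. A minimal sketch (the device index, batch size, and input list are illustrative assumptions; both arguments are standard in `transformers` pipelines):
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/t5-large-squad-qg-no-answer", device=0)
paragraphs = [
    "<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>",
]
questions = [o["generated_text"]
             for o in pipe(["generate question: " + p for p in paragraphs], batch_size=8)]
print(questions)
```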
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-large-squad-qg-no-answer/raw/main/eval/metric.first.sentence.paragraph_sentence.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.41 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 56.44 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 40.29 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 30.87 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 24.27 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 25.67 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 63.97 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 51.3 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_sentence']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-large
- max_length: 512
- max_length_output: 32
- epoch: 7
- batch: 16
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-large-squad-qg-no-answer/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>", "example_title": "Question Generation Example 1"}, {"text": "generate question: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>", "example_title": "Question Generation Example 2"}, {"text": "generate question: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records . <hl>", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/t5-large-squad-qg-no-answer", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 24.27, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 51.3, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 25.67, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.41, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 63.97, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/t5-large-squad-qg-no-answer | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/t5-large-squad-qg-no-paragraph`
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
It is fine-tuned without paragraph information, using only the sentence that contains the answer.
### Overview
- **Language model:** [t5-large](https://huggingface.co/t5-large)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-large-squad-qg-no-paragraph")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-large-squad-qg-no-paragraph")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-large-squad-qg-no-paragraph/raw/main/eval/metric.first.sentence.sentence_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.88 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 57.49 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 41.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 32.1 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 25.36 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 26.28 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 64.44 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 52.53 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['sentence_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-large
- max_length: 128
- max_length_output: 32
- epoch: 6
- batch: 16
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-large-squad-qg-no-paragraph/raw/main/trainer_config.json).
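Since `max_length` was 128, longer inputs were truncated during fine-tuning. A quick way to check whether an input fits that budget (an illustrative utility, not part of the training code):
```python
from transformers import AutoTokenizer

# Count the tokens of a sentence_answer input against the 128-token budget above.
tokenizer = AutoTokenizer.from_pretrained("research-backup/t5-large-squad-qg-no-paragraph")
text = ("generate question: <hl> Beyonce <hl> further expanded her acting career, "
        "starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
n_tokens = len(tokenizer(text)["input_ids"])
print(n_tokens, "tokens (would be truncated)" if n_tokens > 128 else "tokens (fits)")
```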
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/t5-large-squad-qg-no-paragraph", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 25.36, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 52.53, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 26.28, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.88, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 64.44, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/t5-large-squad-qg-no-paragraph | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `lmqg/t5-large-squad-qg`
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [t5-large](https://huggingface.co/t5-large)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/t5-large-squad-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/t5-large-squad-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
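The evaluation below also reports a pipeline approach that pairs this QG model with the answer extraction model [`lmqg/t5-large-squad-ae`](https://huggingface.co/lmqg/t5-large-squad-ae). A sketch of that combination, assuming `lmqg`'s `model_ae` argument and `generate_qa` method (treat this as a sketch of the QAG interface rather than a verified snippet):
```python
from lmqg import TransformersQG

# Pair the QG model with an answer extraction model for end-to-end
# question & answer generation over a raw context.
model = TransformersQG(model="lmqg/t5-large-squad-qg", model_ae="lmqg/t5-large-squad-ae")
qa_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
print(qa_pairs)  # expected: a list of (question, answer) pairs
```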
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 91 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 59.54 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 43.79 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 34.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 27.21 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 27.7 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 65.29 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 54.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 95.57 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 71.1 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 95.62 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 71.41 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 95.51 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 70.8 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated from the answer extracted by [`lmqg/t5-large-squad-ae`](https://huggingface.co/lmqg/t5-large-squad-ae). [raw metric file](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.lmqg_t5-large-squad-ae.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 92.97 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 64.72 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 92.83 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 64.87 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 93.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 64.66 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metrics (Question Generation, Out-of-Domain)***
| Dataset | Type | BERTScore| Bleu_4 | METEOR | MoverScore | ROUGE_L | Link |
|:--------|:-----|---------:|-------:|-------:|-----------:|--------:|-----:|
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | amazon | 91.15 | 6.9 | 23.01 | 61.22 | 25.34 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | new_wiki | 93.17 | 11.18 | 27.92 | 66.31 | 30.06 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | nyt | 92.42 | 8.05 | 25.67 | 64.37 | 25.19 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | reddit | 90.95 | 5.95 | 21.85 | 60.64 | 21.99 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.reddit.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | books | 87.94 | 0.0 | 11.97 | 55.48 | 9.87 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | electronics | 87.86 | 0.84 | 16.16 | 56.05 | 14.13 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | grocery | 87.5 | 0.76 | 15.4 | 56.76 | 10.5 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.grocery.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | movies | 87.34 | 0.0 | 13.03 | 55.36 | 12.27 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | restaurants | 88.25 | 0.0 | 12.45 | 55.91 | 11.93 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | tripadvisor | 89.29 | 0.78 | 16.3 | 56.81 | 14.59 | [link](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.tripadvisor.json) |
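Each linked raw metric file is plain JSON, so the rows above can be reproduced programmatically. A sketch, assuming the files store scores as fractions in [0, 1] under a `test` key (the table reports those fractions multiplied by 100):
```python
import requests

url = ("https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/eval_ood/"
       "metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json")
metrics = requests.get(url).json()
# assumed layout: {"validation": {...}, "test": {"Bleu_4": 0.069, ...}}
print(round(metrics["test"]["Bleu_4"] * 100, 2))  # ~6.9, matching the table
```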
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-large
- max_length: 512
- max_length_output: 32
- epoch: 6
- batch: 16
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-large-squad-qg/raw/main/trainer_config.json).
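To inspect that configuration locally, the file can be pulled with `huggingface_hub` (a sketch; the repository and filename match the link above):
```python
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="lmqg/t5-large-squad-qg", filename="trainer_config.json")
with open(path) as f:
    config = json.load(f)
# note the effective batch size: batch (16) * gradient_accumulation_steps (4) = 64
print(json.dumps(config, indent=2))
```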
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "lmqg/t5-large-squad-qg", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 27.21, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 54.13, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 27.7, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 91.0, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 65.29, "name": "MoverScore (Question Generation)"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.57, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.51, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.62, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 71.1, "name": "QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 70.8, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 71.41, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer", "value": 92.97, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_gold_answer", "value": 93.14, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_gold_answer", "value": 92.83, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer", "value": 64.72, 
"name": "QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_gold_answer", "value": 64.66, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_gold_answer", "value": 64.87, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer]"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "amazon", "args": "amazon"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.06900290231938097, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.2533914694448162, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.23008771718972076, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.911505327721968, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6121573406359604, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "new_wiki", "args": "new_wiki"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.11180552552578073, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.30058260713604856, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.2792115028015132, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9316688723462665, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6630609588403827, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "nyt", "args": "nyt"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.08047293820182351, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.2518886524420378, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.2567360224537303, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9241819763475975, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6437327703980464, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "reddit", "args": "reddit"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.059479733408388684, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.21988765767997162, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.21853957131436155, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.909493447578926, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6064107011094938, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "books", "args": "books"}, "metrics": [{"type": 
"bleu4_question_generation", "value": 8.038380813854933e-07, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.09871887977864714, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.11967515095282454, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.879356137120911, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5548471413251269, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "electronics", "args": "electronics"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.008434036066953862, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.14134333081097744, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.1616192221446712, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8786280911509731, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.560488065035827, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "grocery", "args": "grocery"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.007639835274564104, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.105046370156132, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.1540402363682146, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8749810194969178, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.56763136192963, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "movies", "args": "movies"}, "metrics": [{"type": "bleu4_question_generation", "value": 1.149076256883913e-06, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.12272623105315689, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.13027427314652157, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8733754583767482, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5536261740282519, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "restaurants", "args": "restaurants"}, "metrics": [{"type": "bleu4_question_generation", "value": 1.8508536550762953e-10, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.1192666899417942, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.12447769563902232, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8825407926650608, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5591163692270524, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text 
Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "tripadvisor", "args": "tripadvisor"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.007817275411070228, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.14594416096461188, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.16297700667338805, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8928685000227912, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5681021918513103, "name": "MoverScore (Question Generation)"}]}]}]} | lmqg/t5-large-squad-qg | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/t5-small-squad-qg-default`
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without hyperparameter search; the default configuration is taken from [ERNIE-GEN](https://arxiv.org/abs/2001.11314).
### Overview
- **Language model:** [t5-small](https://huggingface.co/t5-small)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-small-squad-qg-default")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-small-squad-qg-default")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
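The highlighted input above can be assembled from a raw context and answer with a small helper; a sketch following the `generate question:` prefix and `<hl>` convention used throughout this card (`build_qg_input` is an illustrative name, not part of any library):
```python
def build_qg_input(context: str, answer: str) -> str:
    """Wrap the first occurrence of the answer in <hl> markers and add the task prefix."""
    highlighted = context.replace(answer, f"<hl> {answer} <hl>", 1)
    return f"generate question: {highlighted}"

text = build_qg_input(
    "William Turner was an English painter who specialised in watercolour landscapes",
    "William Turner",
)
# pipe(text) then returns the generated question
```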
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-small-squad-qg-default/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.17 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 54.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 38.46 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 29.07 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 22.67 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 24.68 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 63.06 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 49.54 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-small
- max_length: 512
- max_length_output: 32
- epoch: 10
- batch: 32
- lr: 1.25e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.1
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-small-squad-qg-default/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/t5-small-squad-qg-default", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 22.67, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 49.54, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 24.68, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.17, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 63.06, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/t5-small-squad-qg-default | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2001.11314",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `lmqg/t5-small-squad-qg-ae`
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) trained jointly for question generation and answer extraction on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [t5-small](https://huggingface.co/t5-small)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/t5-small-squad-qg-ae")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/t5-small-squad-qg-ae")
# question generation
question = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
# answer extraction
answer = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.")
```
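The two prefixes can also be chained into a rough end-to-end pipeline with `transformers` alone: extract an answer from a highlighted sentence, then highlight that answer for question generation. A sketch (single-sentence input and a single extracted answer are simplifying assumptions):
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/t5-small-squad-qg-ae")
paragraph = ("Beyonce further expanded her acting career, starring as blues singer "
             "Etta James in the 2008 musical biopic, Cadillac Records.")

# step 1: answer extraction from the highlighted sentence
answer = pipe(f"extract answers: <hl> {paragraph} <hl>")[0]["generated_text"]

# step 2: question generation for the extracted answer
highlighted = paragraph.replace(answer, f"<hl> {answer} <hl>", 1)
question = pipe(f"generate question: {highlighted}")[0]["generated_text"]
print(answer, "->", question)
```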
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-small-squad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 56.54 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 40.31 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 30.8 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 24.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 25.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 63.72 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 51.12 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-small-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 91.74 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 63.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 91.49 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 63.26 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 92.01 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 63.29 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/t5-small-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:---------------------------------------------------------------|
| AnswerExactMatch | 54.17 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| AnswerF1Score | 66.92 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| BERTScore | 90.77 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 40.81 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 35.84 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 31.06 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 27.06 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 40.9 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 79.49 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 66.52 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: t5-small
- max_length: 512
- max_length_output: 32
- epoch: 7
- batch: 64
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-small-squad-qg-ae/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation", "answer extraction"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}, {"text": "extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.", "example_title": "Answer Extraction Example 1"}, {"text": "extract answers: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress. 
<hl>", "example_title": "Answer Extraction Example 2"}], "model-index": [{"name": "lmqg/t5-small-squad-qg-ae", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 24.18, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 51.12, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 25.58, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.18, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 63.72, "name": "MoverScore (Question Generation)"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer", "value": 91.74, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer", "value": 92.01, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer", "value": 91.49, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer", "value": 63.23, "name": "QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer", "value": 63.29, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer))"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer", "value": 63.26, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer))"}, {"type": "bleu4_answer_extraction", "value": 27.06, "name": "BLEU4 (Answer Extraction)"}, {"type": "rouge_l_answer_extraction", "value": 66.52, "name": "ROUGE-L (Answer Extraction)"}, {"type": "meteor_answer_extraction", "value": 40.9, "name": "METEOR (Answer Extraction)"}, {"type": "bertscore_answer_extraction", "value": 90.77, "name": "BERTScore (Answer Extraction)"}, {"type": "moverscore_answer_extraction", "value": 79.49, "name": "MoverScore (Answer Extraction)"}, {"type": "answer_f1_score__answer_extraction", "value": 66.92, "name": "AnswerF1Score (Answer Extraction)"}, {"type": "answer_exact_match_answer_extraction", "value": 54.17, "name": "AnswerExactMatch (Answer Extraction)"}]}]}]} | lmqg/t5-small-squad-qg-ae | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"answer extraction",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/t5-small-squad-qg-no-answer`
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without answer information, i.e. it generates a question given only a paragraph (note that the standard model is fine-tuned to generate a question given a paragraph and an associated answer within it).
### Overview
- **Language model:** [t5-small](https://huggingface.co/t5-small)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-small-squad-qg-no-answer")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-small-squad-qg-no-answer")
output = pipe("generate question: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>")
```
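Since this variant needs no answer span, batching over raw paragraphs is straightforward; a sketch (whole-paragraph highlighting follows the example above, and the pipeline is assumed to accept a list of inputs, as `transformers` pipelines generally do):
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/t5-small-squad-qg-no-answer")
paragraphs = [
    "William Turner was an English painter who specialised in watercolour landscapes.",
    "Beyonce further expanded her acting career, starring as blues singer Etta James "
    "in the 2008 musical biopic, Cadillac Records.",
]
# the whole paragraph is highlighted because no answer is provided
inputs = [f"generate question: <hl> {p} <hl>" for p in paragraphs]
for out in pipe(inputs):
    print(out["generated_text"])
```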
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-small-squad-qg-no-answer/raw/main/eval/metric.first.sentence.paragraph_sentence.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 89.64 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 53.37 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 36.67 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 27.4 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 21.12 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 23.38 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 62.07 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 47.47 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_sentence']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-small
- max_length: 512
- max_length_output: 32
- epoch: 7
- batch: 64
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-small-squad-qg-no-answer/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>", "example_title": "Question Generation Example 1"}, {"text": "generate question: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>", "example_title": "Question Generation Example 2"}, {"text": "generate question: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records . <hl>", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/t5-small-squad-qg-no-answer", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 21.12, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 47.47, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 23.38, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 89.64, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 62.07, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/t5-small-squad-qg-no-answer | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `research-backup/t5-small-squad-qg-no-paragraph`
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without paragraph information, using only the sentence that contains the answer.
### Overview
- **Language model:** [t5-small](https://huggingface.co/t5-small)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-small-squad-qg-no-paragraph")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-small-squad-qg-no-paragraph")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
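Because this variant was trained with `max_length: 128` (see the hyperparameters below), it is worth checking that a sentence-level input fits before generation; a minimal sketch using the model's tokenizer:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("research-backup/t5-small-squad-qg-no-paragraph")
text = ("generate question: <hl> Beyonce <hl> further expanded her acting career, "
        "starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
n_tokens = len(tokenizer(text)["input_ids"])
assert n_tokens <= 128, f"input is {n_tokens} tokens and will be truncated"
```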
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-small-squad-qg-no-paragraph/raw/main/eval/metric.first.sentence.sentence_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.36 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 55.39 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 39.1 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 29.7 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 23.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 24.8 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 63.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 50.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['sentence_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-small
- max_length: 128
- max_length_output: 32
- epoch: 8
- batch: 64
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-small-squad-qg-no-paragraph/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "research-backup/t5-small-squad-qg-no-paragraph", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 23.23, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 50.18, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 24.8, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.36, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 63.18, "name": "MoverScore (Question Generation)"}]}]}]} | research-backup/t5-small-squad-qg-no-paragraph | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Card of `lmqg/t5-small-squad-qg`
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [t5-small](https://huggingface.co/t5-small)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/t5-small-squad-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/t5-small-squad-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
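Decoding options pass straight through the pipeline to `model.generate`; a sketch producing several candidate questions (the beam settings are illustrative, not the configuration used for the scores below):
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/t5-small-squad-qg")
outputs = pipe(
    "generate question: <hl> Beyonce <hl> further expanded her acting career, starring "
    "as blues singer Etta James in the 2008 musical biopic, Cadillac Records.",
    num_beams=4,
    num_return_sequences=4,
)
for out in outputs:
    print(out["generated_text"])
```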
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.2 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 56.86 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 40.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 31.05 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 24.4 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 25.84 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 63.89 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 51.43 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 95.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 69.79 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 95.19 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 70.09 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 95.09 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 69.51 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated from an answer extracted by [`lmqg/t5-small-squad-ae`](https://huggingface.co/lmqg/t5-small-squad-ae). [raw metric file](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.lmqg_t5-small-squad-ae.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 92.26 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 63.83 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 92.07 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 63.92 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 92.48 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 63.82 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metrics (Question Generation, Out-of-Domain)***
| Dataset | Type | BERTScore| Bleu_4 | METEOR | MoverScore | ROUGE_L | Link |
|:--------|:-----|---------:|-------:|-------:|-----------:|--------:|-----:|
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | amazon | 89.94 | 5.45 | 20.75 | 59.79 | 22.97 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | new_wiki | 92.61 | 10.48 | 26.21 | 65.05 | 28.11 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | nyt | 91.71 | 6.97 | 23.66 | 62.86 | 23.03 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json) |
| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | reddit | 89.57 | 4.75 | 19.8 | 59.23 | 20.1 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.reddit.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | books | 87.4 | 0.0 | 12.3 | 55.34 | 10.88 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | electronics | 87.12 | 1.16 | 15.49 | 55.55 | 15.62 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | grocery | 87.22 | 0.52 | 14.95 | 57.12 | 12.63 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.grocery.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | movies | 86.84 | 0.0 | 12.11 | 55.01 | 12.63 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | restaurants | 87.49 | 0.0 | 12.67 | 55.04 | 11.53 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json) |
| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | tripadvisor | 88.4 | 1.46 | 15.53 | 55.91 | 14.24 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.tripadvisor.json) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-small
- max_length: 512
- max_length_output: 32
- epoch: 9
- batch: 64
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| {"language": "en", "license": "cc-by-4.0", "tags": ["question generation"], "datasets": ["lmqg/qg_squad"], "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "widget": [{"text": "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 1"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records.", "example_title": "Question Generation Example 2"}, {"text": "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .", "example_title": "Question Generation Example 3"}], "model-index": [{"name": "lmqg/t5-small-squad-qg", "results": [{"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squad", "type": "default", "args": "default"}, "metrics": [{"type": "bleu4_question_generation", "value": 24.4, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 51.43, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 25.84, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 90.2, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 63.89, "name": "MoverScore (Question Generation)"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.14, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.09, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer", "value": 95.19, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 69.79, "name": "QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 69.51, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer", "value": 70.09, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]"}, {"type": "qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer", "value": 92.26, "name": "QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_recall_bertscore_question_answer_generation_gold_answer", "value": 92.48, "name": "QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_precision_bertscore_question_answer_generation_gold_answer", "value": 92.07, "name": "QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer", "value": 63.83, 
"name": "QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_recall_moverscore_question_answer_generation_gold_answer", "value": 63.82, "name": "QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer]"}, {"type": "qa_aligned_precision_moverscore_question_answer_generation_gold_answer", "value": 63.92, "name": "QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer]"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "amazon", "args": "amazon"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.05446530981230419, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.22970251150837936, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.20750111458026313, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8994468043449728, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5979360752045209, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "new_wiki", "args": "new_wiki"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.104778841878282, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.2810996054026912, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.2620896643265683, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9260609935106264, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6505447280842604, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "nyt", "args": "nyt"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.06968574467261796, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.23034544400347773, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.2366281135333324, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.9170723215078939, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.6286133349914554, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_squadshifts", "type": "reddit", "args": "reddit"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.04750005928226048, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.20103251416604878, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.19795765672224766, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8956885570918934, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5923103575686176, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "books", "args": "books"}, "metrics": [{"type": 
"bleu4_question_generation", "value": 9.484839636219606e-07, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.10882963005711024, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.12295516249732996, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8739685463031549, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5533617434235973, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "electronics", "args": "electronics"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.01163379406564442, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.1561742307706773, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.1548763941617263, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.871218326462417, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.555469199401916, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "grocery", "args": "grocery"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.005200691923654061, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.12630554732425642, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.14946423426295516, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8721985507011414, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5711858634802471, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "movies", "args": "movies"}, "metrics": [{"type": "bleu4_question_generation", "value": 9.928321423080042e-07, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.1263481480649435, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.12111872719101677, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.868397428617849, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5500525496260875, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "restaurants", "args": "restaurants"}, "metrics": [{"type": "bleu4_question_generation", "value": 1.728249026089261e-10, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.11532401921027728, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.12673504956336362, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8748602174660739, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5503550909114101, "name": "MoverScore (Question Generation)"}]}, {"task": {"type": "text2text-generation", "name": "Text2text 
Generation"}, "dataset": {"name": "lmqg/qg_subjqa", "type": "tripadvisor", "args": "tripadvisor"}, "metrics": [{"type": "bleu4_question_generation", "value": 0.01455898541449453, "name": "BLEU4 (Question Generation)"}, {"type": "rouge_l_question_generation", "value": 0.1424064090212074, "name": "ROUGE-L (Question Generation)"}, {"type": "meteor_question_generation", "value": 0.15534444057817395, "name": "METEOR (Question Generation)"}, {"type": "bertscore_question_generation", "value": 0.8839819959101786, "name": "BERTScore (Question Generation)"}, {"type": "moverscore_question_generation", "value": 0.5591337724792363, "name": "MoverScore (Question Generation)"}]}]}]} | lmqg/t5-small-squad-qg | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | adapter-transformers |
# Adapter `asahi417/tner-roberta-large-multiconer-en-adapter` for roberta-large
An [adapter](https://adapterhub.ml) for the `roberta-large` model that was trained on the [named-entity-recognition/multiconer](https://adapterhub.ml/explore/named-entity-recognition/multiconer/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-large")
adapter_name = model.load_adapter("asahi417/tner-roberta-large-multiconer-en-adapter", source="hf", set_active=True)
```
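Once the adapter and its tagging head are active, inference follows the usual token-classification pattern. A minimal sketch continuing from the snippet above (the example sentence is illustrative, and depending on the adapter-transformers version the label mapping may live on the prediction head's config rather than `model.config`):
```python
import torch
from transformers import AutoTokenizer

# The adapter reuses the roberta-large vocabulary, so the base tokenizer applies.
tokenizer = AutoTokenizer.from_pretrained("roberta-large")

text = "Elon Musk founded SpaceX in 2002."  # illustrative example
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # `model` from the snippet above

# Highest-scoring tag index per token, mapped back to tag strings
# (assumed here to be mirrored on model.config.id2label).
pred_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print([(t, model.config.id2label[i.item()]) for t, i in zip(tokens, pred_ids)])
```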
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["adapter-transformers", "adapterhub:named-entity-recognition/multiconer", "roberta"], "datasets": ["multiconer"]} | asahi417/tner-roberta-large-multiconer-en-adapter | null | [
"adapter-transformers",
"roberta",
"adapterhub:named-entity-recognition/multiconer",
"dataset:multiconer",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | {} | asahi417/tner-roberta-large-multiconer-en | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-all-english")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-all-english")
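# A minimal inference sketch continuing from the lines above (the example
# sentence is illustrative; id2label is read from the fine-tuned model's config):
import torch
text = "Jacob Collier is an English artist from London."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)[0]
labels = [model.config.id2label[i.item()] for i in pred_ids]
print(list(zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), labels)))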
``` | {} | asahi417/tner-xlm-roberta-base-all-english | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-bc5cdr")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-bc5cdr")
``` | {} | tner/xlm-roberta-base-bc5cdr | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-bionlp2004")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-bionlp2004")
``` | {} | tner/xlm-roberta-base-bionlp2004 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-conll2003")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-conll2003")
``` | {} | tner/xlm-roberta-base-conll2003 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-fin")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-fin")
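# Alternatively, a sketch using the transformers pipeline, which bundles
# tokenization and label aggregation (aggregation_strategy needs a recent
# transformers version; the example sentence is illustrative):
from transformers import pipeline
nlp = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(nlp("Shares of HSBC rose sharply after the announcement."))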
``` | {} | tner/xlm-roberta-base-fin | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | # Model Card for XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER.
# Model Details
## Model Description
XLM-RoBERTa fine-tuned for NER.
- **Developed by:** Asahi Ushio
- **Shared by [Optional]:** Hugging Face
- **Model type:** Token Classification
- **Language(s) (NLP):** en
- **License:** More information needed
- **Related Models:** XLM-RoBERTa
- **Parent Model:** XLM-RoBERTa
- **Resources for more information:**
- [GitHub Repo](https://github.com/asahi417/tner)
- [Associated Paper](https://arxiv.org/abs/2209.12616)
- [Space](https://huggingface.co/spaces/akdeniz27/turkish-named-entity-recognition)
# Uses
## Direct Use
Token Classification
## Downstream Use [Optional]
This model can be used in conjunction with the [tner library](https://github.com/asahi417/tner).
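For instance, a minimal sketch using the tner API (class and method names follow the library's README; the example sentence is illustrative):
```python
# pip install tner
from tner import TransformersNER

model = TransformersNER("asahi417/tner-xlm-roberta-base-ontonotes5")
output = model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
print(output)
```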
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
# Training Details
## Training Data
An NER dataset contains a sequence of tokens and tags for each split (usually `train`/`validation`/`test`),
```python
{
'train': {
'tokens': [
['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'],
['From', 'Green', 'Newsfeed', ':', 'AHFA', 'extends', 'deadline', 'for', 'Sage', 'Award', 'to', 'Nov', '.', '5', 'http://tinyurl.com/24agj38'], ...
],
'tags': [
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ...
]
},
'validation': ...,
'test': ...,
}
```
with a dictionary to map a label to its index (`label2id`) as below.
```python
{"O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4, "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8}
```
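Inverting that mapping turns predicted indices back into tag strings; a minimal sketch using the tag sequence from the second example above:
```python
label2id = {"O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4, "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8}
id2label = {v: k for k, v in label2id.items()}

tags = [0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print([id2label[t] for t in tags])
# ['O', 'O', 'O', 'O', 'B-PER', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
```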
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
- **layer_norm_eps:** 1e-05
- **num_attention_heads:** 12
- **num_hidden_layers:** 12
- **vocab_size:** 250002
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See [dataset card](https://github.com/asahi417/tner/blob/master/DATASET_CARD.md) for full dataset lists
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.eacl-demos.7",
pages = "53--62",
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Asahi Ushio in collaboration with Ezi Ozoani and the Hugging Face team.
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5")
```
</details>
| {"language": ["en"]} | asahi417/tner-xlm-roberta-base-ontonotes5 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"en",
"arxiv:2209.12616",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ar")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ar")
``` | {} | tner/xlm-roberta-base-panx-dataset-ar | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-en")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-en")
``` | {} | tner/xlm-roberta-base-panx-dataset-en | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |