Tasks: Text Generation
Sub-tasks: language-modeling
Languages: Italian
Tags: question-generation
License: cc-by-sa-4.0
Update README.md
README.md (CHANGED)
@@ -1,7 +1,7 @@
 ---
 license: cc-by-sa-4.0
 pretty_name: SQuAD for question generation
-language:
+language: it
 multilinguality: monolingual
 size_categories: 1k<n<10K
 source_datasets: lmqg/qg_itquad
@@ -29,7 +29,7 @@ This is the question & answer generation dataset based on the ITQuAD.
 Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail).
 
 ### Languages
-
+Italian (it)
 
 ## Dataset Structure
 An example of 'train' looks as follows.
@@ -51,7 +51,7 @@ The data fields are the same among all splits.
 
 |train|validation|test |
 |----:|---------:|----:|
-
+|16918| 6280| 1988|
 
 
 ## Citation Information
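For readers who want to sanity-check the split sizes added in this diff (train 16918, validation 6280, test 1988), below is a minimal sketch using the Hugging Face `datasets` library. The repository ID is an assumption: the card lists `lmqg/qg_itquad` only as its source dataset, so substitute the ID of this card's own dataset repository if it differs.

```python
# Minimal sketch: load the dataset and report split sizes.
# Assumes the `datasets` library is installed and that the repository ID is
# `lmqg/qg_itquad` (the source dataset named in the card); replace it with
# this card's own dataset ID if different. Depending on your `datasets`
# version, repositories that ship a loading script may additionally require
# passing trust_remote_code=True to load_dataset.
from datasets import load_dataset

dataset = load_dataset("lmqg/qg_itquad")

# Per the diff, this card's dataset should report
# train=16918, validation=6280, test=1988.
for split_name, split in dataset.items():
    print(f"{split_name}: {len(split)} examples")

# Inspect the fields of one training example without assuming field names.
print(sorted(dataset["train"][0].keys()))
```

This only verifies the row counts shown in the updated splits table; it does not assume anything about the column schema beyond what the loaded dataset itself reports.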