Tasks: Text Generation
Modalities: Text
Sub-tasks: language-modeling
Languages: Spanish
Size: 10K - 100K
Tags: question-generation
License: cc-by-sa-4.0
Update README.md
README.md CHANGED
@@ -1,10 +1,10 @@
 ---
 license: cc-by-sa-4.0
 pretty_name: SQuAD for question generation
-language:
+language: es
 multilinguality: monolingual
 size_categories: 1k<n<10K
-source_datasets:
+source_datasets: lmqg/qg_esquad
 task_categories:
 - text-generation
 task_ids:
@@ -13,7 +13,7 @@ tags:
 - question-generation
 ---
 
-# Dataset Card for "lmqg/
+# Dataset Card for "lmqg/qag_esquad"
 
 
 ## Dataset Description
@@ -22,14 +22,14 @@ tags:
 - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
 
 ### Dataset Summary
-This is the question & answer generation dataset based on the
+This is the question & answer generation dataset based on the ESQuAD.
 
 ### Supported Tasks and Leaderboards
 * `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
 Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail).
 
 ### Languages
-
+Spanish (es)
 
 ## Dataset Structure
 An example of 'train' looks as follows.
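The card cuts off just before the 'train' example it refers to. For readers who want to see the record layout directly, here is a minimal sketch using the Hugging Face `datasets` library; only the repository name `lmqg/qag_esquad` is taken from the card, and the printed splits and fields are whatever the hub actually serves, so no field names are assumed here.

```python
# Minimal sketch: load lmqg/qag_esquad and inspect one training record.
# The repository name comes from the card above; everything printed is
# fetched from the hub at runtime, so no schema is hard-coded here.
from datasets import load_dataset

dataset = load_dataset("lmqg/qag_esquad")

# Show the available splits and the layout of a single 'train' example.
print(dataset)
example = dataset["train"][0]
print(example)
```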