Modalities: Text
Formats: json
Sub-tasks: extractive-qa
Languages: Catalan
Size: < 1K
Libraries: Datasets, pandas

parquet-converter committed · Commit 8cc5a26 · 1 parent: df5fe39

Update parquet files
.gitattributes DELETED
@@ -1,30 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- dev.json filter=lfs diff=lfs merge=lfs -text
- test.json filter=lfs diff=lfs merge=lfs -text
- train.json filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,203 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - found
- language:
- - ca
- license:
- - cc-by-sa-4.0
- multilinguality:
- - monolingual
- pretty_name: ViquiQuAD
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - question-answering
- task_ids:
- - extractive-qa
- ---
-
- # ViquiQuAD, an extractive QA dataset for Catalan, from Wikipedia
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://zenodo.org/record/4562345#.YK41aqGxWUk
- - **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- - **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:[email protected]) and [Carme Armentano-Oller](mailto:[email protected])
-
- ### Dataset Summary
-
- ViquiQuAD is an extractive QA dataset for Catalan, built from Wikipedia.
-
- The dataset contains 3111 contexts extracted from a set of 597 high-quality, original (non-translated) articles in the Catalan Wikipedia, "[Viquipèdia](https://ca.wikipedia.org/wiki/Portada)", together with 1 to 5 questions and their answers for each fragment.
-
- Viquipèdia articles are used under a [CC-BY-SA](https://creativecommons.org/licenses/by-sa/3.0/legalcode) licence.
-
- This dataset can be used to fine-tune and evaluate extractive QA systems and language models.
-
- ### Supported Tasks and Leaderboards
-
- Extractive QA, language modelling.
-
- ### Languages
-
- The dataset is in Catalan (`ca-CA`).
-
- ## Dataset Structure
-
- ### Data Instances
-
- ```
- {
-   'id': 'P_66_C_391_Q1',
-   'title': 'Xavier Miserachs i Ribalta',
-   'context': "En aquesta època es va consolidar el concepte modern del reportatge fotogràfic, diferenciat del fotoperiodisme[n. 2] i de la fotografia documental,[n. 3] pel que fa a l'abast i el concepte. El reportatge fotogràfic implica més la idea de relat: un treball que vol més dedicació de temps, un esforç d'interpretació d'una situació i que culmina en un conjunt d'imatges. Això implica, d'una banda, la reivindicació del fotògraf per opinar, fet que li atorgarà estatus d'autor; l'autor proposa, doncs, una interpretació pròpia de la realitat. D'altra banda, el consens que s'estableix entre la majoria de fotògrafs és que el vehicle natural de la imatge fotogràfica és la pàgina impresa. Això suposà que revistes com Life, Paris-Match, Stern o Época assolissin la màxima esplendor en aquest període.",
-   'question': 'De què es diferenciava el reportatge fotogràfic?',
-   'answers': [{
-     'text': 'del fotoperiodisme[n. 2] i de la fotografia documental',
-     'answer_start': 92
-   }]
- }
- ```
-
- ### Data Fields
-
- Follows [Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets.
-
- - `id` (str): Unique ID assigned to the question.
- - `title` (str): Title of the Wikipedia article.
- - `context` (str): Wikipedia section text.
- - `question` (str): Question.
- - `answers` (list): List of answers to the question, each containing:
-   - `text` (str): Span of text answering the question.
-   - `answer_start` (int): Starting character offset of the answer span within `context`.
-
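Because the fields follow the SQuAD v1 convention, an `answers` entry can be validated by slicing `context` at `answer_start`. A minimal sketch, using a shortened version of the example instance above (the `check_answer_offsets` helper is ours, purely illustrative):

```python
# Illustrative helper (not part of the dataset tooling): verify that each
# answer's offset really points at its span inside the context.
def check_answer_offsets(example):
    ctx = example["context"]
    return all(
        ctx[a["answer_start"]:a["answer_start"] + len(a["text"])] == a["text"]
        for a in example["answers"]
    )

example = {
    "context": (
        "En aquesta època es va consolidar el concepte modern del "
        "reportatge fotogràfic, diferenciat del fotoperiodisme[n. 2] "
        "i de la fotografia documental,[n. 3] pel que fa a l'abast i "
        "el concepte."
    ),
    "answers": [{
        "text": "del fotoperiodisme[n. 2] i de la fotografia documental",
        "answer_start": 92,
    }],
}

ok = check_answer_offsets(example)  # True: the offset lines up
```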
- ### Data Splits
-
- - train: 11259 examples
- - development: 1493 examples
- - test: 1428 examples
-
-
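Summing the splits above gives 14180 examples in total; their relative sizes follow from plain arithmetic (this sketch is ours, no download involved):

```python
# Split sizes as listed above; shares are simple ratios.
splits = {"train": 11259, "development": 1493, "test": 1428}
total = sum(splits.values())                                        # 14180
shares = {name: round(n / total, 3) for name, n in splits.items()}
# train holds roughly 79% of the examples
```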
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
-
- ### Source Data
-
- - [Catalan Wikipedia](https://ca.wikipedia.org)
-
- #### Initial Data Collection and Normalization
-
- The source data are scraped articles from the [Catalan Wikipedia](https://ca.wikipedia.org) site.
-
- From a set of high-quality, non-translated articles in the Catalan Wikipedia, 597 were randomly chosen, and from them 3111 contexts of 5 to 8 sentences were extracted. We commissioned the creation of between 1 and 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)). In total, 15153 pairs of a question and an extracted fragment containing the answer were created.
-
- For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
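The figures quoted above (597 articles, 3111 contexts, 15153 pairs) can be cross-checked with plain arithmetic (this sketch is ours, purely a sanity check):

```python
# Figures quoted in the collection description above.
articles, contexts, qa_pairs = 597, 3111, 15153
contexts_per_article = round(contexts / articles, 2)   # about 5.21
questions_per_context = round(qa_pairs / contexts, 2)  # about 4.87, consistent with "1 to 5"
```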
-
- #### Who are the source language producers?
-
- Volunteers who collaborate with the Catalan Wikipedia.
-
- ### Annotations
-
- #### Annotation process
-
- We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)).
-
- #### Who are the annotators?
-
- Annotation was commissioned to a specialized company that hired a team of native speakers.
-
- ### Personal and Sensitive Information
-
- The dataset contains no personal or sensitive information.
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
-
- ### Discussion of Biases
-
- [N/A]
-
- ### Other Known Limitations
-
- [N/A]
-
- ## Additional Information
-
- ### Dataset Curators
-
- Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
-
- This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
-
- ### Licensing Information
-
- This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
-
- ### Citation Information
-
- ```
- @inproceedings{armengol-estape-etal-2021-multilingual,
-     title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
-     author = "Armengol-Estap{\'e}, Jordi and
-       Carrino, Casimiro Pio and
-       Rodriguez-Penagos, Carlos and
-       de Gibert Bonet, Ona and
-       Armentano-Oller, Carme and
-       Gonzalez-Agirre, Aitor and
-       Melero, Maite and
-       Villegas, Marta",
-     booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
-     month = aug,
-     year = "2021",
-     address = "Online",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.findings-acl.437",
-     doi = "10.18653/v1/2021.findings-acl.437",
-     pages = "4933--4946",
- }
- ```
-
- [DOI](https://doi.org/10.5281/zenodo.4562344)
-
- ### Contributions
-
- [N/A]
dev.json → default/viquiquad-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3281866ddea09816d1bf1aba1adbbf544c5877f171ec6d76cebeb222575d10e3
- size 550588
+ oid sha256:0f7fdf5fb6b66b47d39aab6ffabf3c5bf320aab263b97b638aa10694eb2221d8
+ size 269471
train.json → default/viquiquad-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ee9b705b4af4f4b3c0cf785ce94de6ab0dcbd15f765236224b18328213393cf3
- size 4114735
+ oid sha256:ea999f844e175891803e339b67a63daccacbc4a920313bf5010fc570e7580153
+ size 2072017
test.json → default/viquiquad-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f048ccc2c744737e1c14145280133f1f363eb0508717c1a8688f6e926ecea50d
- size 518015
+ oid sha256:8e886687c21d5c85ee3819eebb051a8dea14285a8a78762840479fc721825df5
+ size 283579
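Each of the renamed files above is stored as a three-line Git LFS pointer (version / oid / size) rather than raw bytes. A minimal sketch of parsing such a pointer into a dict (`parse_lfs_pointer` is an illustrative helper of ours, not part of Git LFS tooling):

```python
# Illustrative: parse the three-line Git LFS pointer format shown in the
# diffs above into a key/value dict.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:0f7fdf5fb6b66b47d39aab6ffabf3c5bf320aab263b97b638aa10694eb2221d8\n"
    "size 269471\n"
)
info = parse_lfs_pointer(pointer)
# info["size"] holds the real file size; info["oid"] its sha256 checksum
```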
viquiquad.py DELETED
@@ -1,102 +0,0 @@
- """ViquiQuAD Dataset."""
- # Loading script for the ViquiQuAD dataset.
- import json
-
- import datasets
-
- logger = datasets.logging.get_logger(__name__)
-
- _CITATION = """\
- Rodriguez-Penagos, Carlos Gerardo, & Armentano-Oller, Carme. (2021).
- ViquiQuAD: an extractive QA dataset from Catalan Wikipedia (Version ViquiQuad_v.1.0.1)
- [Data set]. Zenodo. http://doi.org/10.5281/zenodo.4761412
- """
-
- _DESCRIPTION = """\
- ViquiQuAD: an extractive QA dataset from Catalan Wikipedia.
- This dataset contains 3111 contexts extracted from a set of 597 high quality original (no translations)
- articles in the Catalan Wikipedia "Viquipèdia" (ca.wikipedia.org), and 1 to 5 questions with their
- answer for each fragment. Viquipedia articles are used under CC-by-sa licence.
- This dataset can be used to build extractive-QA and Language Models.
- Funded by the Generalitat de Catalunya, Departament de Polítiques Digitals i Administració Pública (AINA),
- MT4ALL and Plan de Impulso de las Tecnologías del Lenguaje (Plan TL).
- """
-
- _HOMEPAGE = "https://zenodo.org/record/4562345#.YK41aqGxWUk"
-
- _URL = "https://huggingface.co/datasets/projecte-aina/viquiquad/resolve/main/"
- _TRAINING_FILE = "train.json"
- _DEV_FILE = "dev.json"
- _TEST_FILE = "test.json"
-
-
- class ViquiQuAD(datasets.GeneratorBasedBuilder):
-     """ViquiQuAD Dataset."""
-
-     VERSION = datasets.Version("1.0.1")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "title": datasets.Value("string"),
-                     "context": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "answers": [
-                         {
-                             "text": datasets.Value("string"),
-                             "answer_start": datasets.Value("int32"),
-                         }
-                     ],
-                 }
-             ),
-             # No default supervised_keys (as we have to pass both question
-             # and context as input).
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         urls_to_download = {
-             "train": f"{_URL}{_TRAINING_FILE}",
-             "dev": f"{_URL}{_DEV_FILE}",
-             "test": f"{_URL}{_TEST_FILE}",
-         }
-         downloaded_files = dl_manager.download(urls_to_download)
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
-         ]
-
-     def _generate_examples(self, filepath):
-         """This function returns the examples in the raw (text) form."""
-         logger.info("generating examples from = %s", filepath)
-         with open(filepath, encoding="utf-8") as f:
-             viquiquad = json.load(f)
-             for article in viquiquad["data"]:
-                 title = article.get("title", "").strip()
-                 for paragraph in article["paragraphs"]:
-                     context = paragraph["context"].strip()
-                     for qa in paragraph["qas"]:
-                         question = qa["question"].strip()
-                         id_ = qa["id"]
-                         # answer_starts = [answer["answer_start"] for answer in qa["answers"]]
-                         # answers = [answer["text"].strip() for answer in qa["answers"]]
-                         text = qa["answers"][0]["text"]
-                         answer_start = qa["answers"][0]["answer_start"]
-
-                         # Features currently used are "context", "question", and "answers".
-                         # Others are extracted here for the ease of future expansions.
-                         yield id_, {
-                             "title": title,
-                             "context": context,
-                             "question": question,
-                             "id": id_,
-                             "answers": [{"text": text, "answer_start": answer_start}],
-                         }
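The deleted script above walks the SQuAD-style JSON tree (`data` → `paragraphs` → `qas`). The same traversal can be sketched standalone on an in-memory dict, without the `datasets` dependency (the toy record below abbreviates the text fields):

```python
# Standalone sketch of the _generate_examples traversal above, applied to a
# toy SQuAD-style dict. Like the deleted script, it keeps only the first
# answer of each question.
def iter_examples(squad_dict):
    for article in squad_dict["data"]:
        title = article.get("title", "").strip()
        for paragraph in article["paragraphs"]:
            context = paragraph["context"].strip()
            for qa in paragraph["qas"]:
                first = qa["answers"][0]
                yield qa["id"], {
                    "title": title,
                    "context": context,
                    "question": qa["question"].strip(),
                    "id": qa["id"],
                    "answers": [{"text": first["text"],
                                 "answer_start": first["answer_start"]}],
                }

toy = {
    "data": [{
        "title": "Xavier Miserachs i Ribalta",
        "paragraphs": [{
            "context": "En aquesta època ...",
            "qas": [{
                "id": "P_66_C_391_Q1",
                "question": "De què es diferenciava el reportatge fotogràfic?",
                "answers": [{"text": "...", "answer_start": 92}],
            }],
        }],
    }],
}

examples = dict(iter_examples(toy))
```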