Datasets: dainis-boumber
Commit 5759e63 (committed by dainis-boumber)
Parent(s): 21bb89e
Upload gdds.py
gdds.py CHANGED
@@ -1,492 +1,8 @@
-import json
-import os
-
-import datasets
-
-
-_SUPER_GLUE_CITATION = """\
-"""
-
-_GLUE_DESCRIPTION = """\
-SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
-GLUE with a new set of more difficult language understanding tasks, improved
-resources, and a new public leaderboard.
-"""
-
-_BOOLQ_DESCRIPTION = """\
-BoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short
-passage and a yes/no question about the passage. The questions are provided anonymously and
-unsolicited by users of the Google search engine, and afterwards paired with a paragraph from a
-Wikipedia article containing the answer. Following the original work, we evaluate with accuracy."""
-
-_CB_DESCRIPTION = """\
-The CommitmentBank (De Marneffe et al., 2019) is a corpus of short texts in which at least
-one sentence contains an embedded clause. Each of these embedded clauses is annotated with the
-degree to which we expect that the person who wrote the text is committed to the truth of the clause.
-The resulting task is framed as three-class textual entailment on examples that are drawn from the Wall
-Street Journal, fiction from the British National Corpus, and Switchboard. Each example consists
-of a premise containing an embedded clause and the corresponding hypothesis is the extraction of
-that clause. We use a subset of the data that had inter-annotator agreement above 0.85. The data is
-imbalanced (relatively fewer neutral examples), so we evaluate using accuracy and F1, where for
-multi-class F1 we compute the unweighted average of the F1 per class."""
-
-_COPA_DESCRIPTION = """\
-The Choice Of Plausible Alternatives (COPA, Roemmele et al., 2011) dataset is a causal
-reasoning task in which a system is given a premise sentence and two possible alternatives. The
-system must choose the alternative which has the more plausible causal relationship with the premise.
-The method used for the construction of the alternatives ensures that the task requires causal reasoning
-to solve. Examples either deal with alternative possible causes or alternative possible effects of the
-premise sentence, accompanied by a simple question disambiguating between the two instance
-types for the model. All examples are handcrafted and focus on topics from online blogs and a
-photography-related encyclopedia. Following the recommendation of the authors, we evaluate using
-accuracy."""
-
-_RECORD_DESCRIPTION = """\
-ReCoRD (Reading Comprehension with Commonsense Reasoning Dataset, Zhang et al., 2018) is a
-multiple-choice QA task. Each example consists of a news article and a Cloze-style question about
-the article in which one entity is masked out. The system must predict the masked out entity from a
-given list of possible entities in the provided passage, where the same entity may be expressed using
-multiple different surface forms, all of which are considered correct. Articles are drawn from CNN
-and Daily Mail. Following the original work, we evaluate with max (over all mentions) token-level
-F1 and exact match (EM)."""
-
-_RTE_DESCRIPTION = """\
-The Recognizing Textual Entailment (RTE) datasets come from a series of annual competitions
-on textual entailment, the problem of predicting whether a given premise sentence entails a given
-hypothesis sentence (also known as natural language inference, NLI). RTE was previously included
-in GLUE, and we use the same data and format as before: We merge data from RTE1 (Dagan
-et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli
-et al., 2009). All datasets are combined and converted to two-class classification: entailment and
-not_entailment. Of all the GLUE tasks, RTE was among those that benefited from transfer learning
-the most, jumping from near random-chance performance (~56%) at the time of GLUE's launch to
-85% accuracy (Liu et al., 2019c) at the time of writing. Given the eight point gap with respect to
-human performance, however, the task is not yet solved by machines, and we expect the remaining
-gap to be difficult to close."""
-
-_MULTIRC_DESCRIPTION = """\
-The Multi-Sentence Reading Comprehension dataset (MultiRC, Khashabi et al., 2018)
-is a true/false question-answering task. Each example consists of a context paragraph, a question
-about that paragraph, and a list of possible answers to that question which must be labeled as true or
-false. Question-answering (QA) is a popular problem with many datasets. We use MultiRC because
-of a number of desirable properties: (i) each question can have multiple possible correct answers,
-so each question-answer pair must be evaluated independent of other pairs, (ii) the questions are
-designed such that answering each question requires drawing facts from multiple context sentences,
-and (iii) the question-answer pair format more closely matches the API of other SuperGLUE tasks
-than span-based extractive QA does. The paragraphs are drawn from seven domains including news,
-fiction, and historical text."""
-
-_WIC_DESCRIPTION = """\
-The Word-in-Context (WiC, Pilehvar and Camacho-Collados, 2019) dataset supports a word
-sense disambiguation task cast as binary classification over sentence pairs. Given two sentences and a
-polysemous (sense-ambiguous) word that appears in both sentences, the task is to determine whether
-the word is used with the same sense in both sentences. Sentences are drawn from WordNet (Miller,
-1995), VerbNet (Schuler, 2005), and Wiktionary. We follow the original work and evaluate using
-accuracy."""
-
-_WSC_DESCRIPTION = """\
-The Winograd Schema Challenge (WSC, Levesque et al., 2012) is a reading comprehension
-task in which a system must read a sentence with a pronoun and select the referent of that pronoun
-from a list of choices. Given the difficulty of this task and the headroom still left, we have included
-WSC in SuperGLUE and recast the dataset into its coreference form. The task is cast as a binary
-classification problem, as opposed to N-multiple choice, in order to isolate the model's ability to
-understand the coreference links within a sentence as opposed to various other strategies that may
-come into play in multiple choice conditions. With that in mind, we create a split with 65% negative
-majority class in the validation set, reflecting the distribution of the hidden test set, and 52% negative
-class in the training set. The training and validation examples are drawn from the original Winograd
-Schema dataset (Levesque et al., 2012), as well as those distributed by the affiliated organization
-Commonsense Reasoning. The test examples are derived from fiction books and have been shared
-with us by the authors of the original dataset. Previously, a version of WSC recast as NLI was included
-in GLUE, known as WNLI. No substantial progress was made on WNLI, with many submissions
-opting to submit only majority class predictions. WNLI was made especially difficult due to an
-adversarial train/dev split: Premise sentences that appeared in the training set sometimes appeared
-in the development set with a different hypothesis and a flipped label. If a system memorized the
-training set without meaningfully generalizing, which was easy due to the small size of the training
-set, it could perform far below chance on the development set. We remove this adversarial design
-in the SuperGLUE version of WSC by ensuring that no sentences are shared between the training,
-validation, and test sets.
-However, the validation and test sets come from different domains, with the validation set consisting
-of ambiguous examples such that changing one non-noun phrase word will change the coreference
-dependencies in the sentence. The test set consists only of more straightforward examples, with a
-high number of noun phrases (and thus more choices for the model), but low to no ambiguity."""
-
-_AXB_DESCRIPTION = """\
-An expert-constructed
-diagnostic dataset that automatically tests models for a broad range of linguistic, commonsense, and
-world knowledge. Each example in this broad-coverage diagnostic is a sentence pair labeled with
-a three-way entailment relation (entailment, neutral, or contradiction) and tagged with labels that
-indicate the phenomena that characterize the relationship between the two sentences. Submissions
-to the GLUE leaderboard are required to include predictions from the submission's MultiNLI
-classifier on the diagnostic dataset, and analyses of the results were shown alongside the main
-leaderboard. Since this broad-coverage diagnostic task has proved difficult for top models, we retain
-it in SuperGLUE. However, since MultiNLI is not part of SuperGLUE, we collapse contradiction
-and neutral into a single not_entailment label, and request that submissions include predictions
-on the resulting set from the model used for the RTE task.
-"""
-
-_AXG_DESCRIPTION = """\
-Winogender is designed to measure gender
-bias in coreference resolution systems. We use the Diverse Natural Language Inference Collection
-(DNC; Poliak et al., 2018) version that casts Winogender as a textual entailment task. Each example
-consists of a premise sentence with a male or female pronoun and a hypothesis giving a possible
-antecedent of the pronoun. Examples occur in minimal pairs, where the only difference between
-an example and its pair is the gender of the pronoun in the premise. Performance on Winogender
-is measured with both accuracy and the gender parity score: the percentage of minimal pairs for
-which the predictions are the same. We note that a system can trivially obtain a perfect gender parity
-score by guessing the same class for all examples, so a high gender parity score is meaningless unless
-accompanied by high accuracy. As a diagnostic test of gender bias, we view the schemas as having high
-positive predictive value and low negative predictive value; that is, they may demonstrate the presence
-of gender bias in a system, but not prove its absence.
-"""
-
-_BOOLQ_CITATION = """\
-@inproceedings{clark2019boolq,
-  title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},
-  author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},
-  booktitle={NAACL},
-  year={2019}
-}"""
-
-_CB_CITATION = """\
-@article{de marneff_simons_tonhauser_2019,
-  title={The CommitmentBank: Investigating projection in naturally occurring discourse},
-  journal={Proceedings of Sinn und Bedeutung 23},
-  author={De Marneffe, Marie-Catherine and Simons, Mandy and Tonhauser, Judith},
-  year={2019}
-}"""
-
-_COPA_CITATION = """\
-@inproceedings{roemmele2011choice,
-  title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
-  author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},
-  booktitle={2011 AAAI Spring Symposium Series},
-  year={2011}
-}"""
-
-_RECORD_CITATION = """\
-@article{zhang2018record,
-  title={ReCoRD: Bridging the gap between human and machine commonsense reading comprehension},
-  author={Zhang, Sheng and Liu, Xiaodong and Liu, Jingjing and Gao, Jianfeng and Duh, Kevin and Van Durme, Benjamin},
-  journal={arXiv preprint arXiv:1810.12885},
-  year={2018}
-}"""
-
-_RTE_CITATION = """\
-@inproceedings{dagan2005pascal,
-  title={The PASCAL recognising textual entailment challenge},
-  author={Dagan, Ido and Glickman, Oren and Magnini, Bernardo},
-  booktitle={Machine Learning Challenges Workshop},
-  pages={177--190},
-  year={2005},
-  organization={Springer}
-}
-@inproceedings{bar2006second,
-  title={The second PASCAL recognising textual entailment challenge},
-  author={Bar-Haim, Roy and Dagan, Ido and Dolan, Bill and Ferro, Lisa and Giampiccolo, Danilo and Magnini, Bernardo and Szpektor, Idan},
-  booktitle={Proceedings of the second PASCAL challenges workshop on recognising textual entailment},
-  volume={6},
-  number={1},
-  pages={6--4},
-  year={2006},
-  organization={Venice}
-}
-@inproceedings{giampiccolo2007third,
-  title={The third PASCAL recognizing textual entailment challenge},
-  author={Giampiccolo, Danilo and Magnini, Bernardo and Dagan, Ido and Dolan, Bill},
-  booktitle={Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing},
-  pages={1--9},
-  year={2007},
-  organization={Association for Computational Linguistics}
-}
-@inproceedings{bentivogli2009fifth,
-  title={The Fifth PASCAL Recognizing Textual Entailment Challenge},
-  author={Bentivogli, Luisa and Clark, Peter and Dagan, Ido and Giampiccolo, Danilo},
-  booktitle={TAC},
-  year={2009}
-}"""
-
-_MULTIRC_CITATION = """\
-@inproceedings{MultiRC2018,
-  author={Daniel Khashabi and Snigdha Chaturvedi and Michael Roth and Shyam Upadhyay and Dan Roth},
-  title={Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences},
-  booktitle={Proceedings of North American Chapter of the Association for Computational Linguistics (NAACL)},
-  year={2018}
-}"""
-
-_WIC_CITATION = """\
-@article{DBLP:journals/corr/abs-1808-09121,
-  author={Mohammad Taher Pilehvar and Jos{\'{e}} Camacho{-}Collados},
-  title={WiC: 10,000 Example Pairs for Evaluating Context-Sensitive Representations},
-  journal={CoRR},
-  volume={abs/1808.09121},
-  year={2018},
-  url={http://arxiv.org/abs/1808.09121},
-  archivePrefix={arXiv},
-  eprint={1808.09121},
-  timestamp={Mon, 03 Sep 2018 13:36:40 +0200},
-  biburl={https://dblp.org/rec/bib/journals/corr/abs-1808-09121},
-  bibsource={dblp computer science bibliography, https://dblp.org}
-}"""
-
-_WSC_CITATION = """\
-@inproceedings{levesque2012winograd,
-  title={The Winograd schema challenge},
-  author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora},
-  booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning},
-  year={2012}
-}"""
-
-_AXG_CITATION = """\
-@inproceedings{rudinger-EtAl:2018:N18,
-  author={Rudinger, Rachel and Naradowsky, Jason and Leonard, Brian and {Van Durme}, Benjamin},
-  title={Gender Bias in Coreference Resolution},
-  booktitle={Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
-  month={June},
-  year={2018},
-  address={New Orleans, Louisiana},
-  publisher={Association for Computational Linguistics}
-}
-"""
-
-
-class SuperGlueConfig(datasets.BuilderConfig):
-    """BuilderConfig for SuperGLUE."""
-
-    def __init__(self, **kwargs):
-        """BuilderConfig for SuperGLUE.
-        Args:
-          features: `list[string]`, list of the features that will appear in the
-            feature dict. Should not include "label".
-          data_url: `string`, url to download the zip file from.
-          citation: `string`, citation for the data set.
-          url: `string`, url for information about the data set.
-          label_classes: `list[string]`, the list of classes for the label if the
-            label is present as a string. Non-string labels will be cast to either
-            'False' or 'True'.
-          **kwargs: keyword arguments forwarded to super.
-        """
-        # Version history:
-        # 1.0.3: Fix not including entity position in ReCoRD.
-        # 1.0.2: Fixed non-determinism in ReCoRD.
-        # 1.0.1: Change from the pre-release trial version of SuperGLUE (v1.9) to
-        #        the full release (v2.0).
-        # 1.0.0: S3 (new shuffling, sharding and slicing mechanism).
-        # 0.0.2: Initial version.
-        super(SuperGlueConfig, self).__init__(version=datasets.Version("1.0.3"), **kwargs)
-        self.features = datasets.Features({"text": datasets.Value("string"), "label": datasets.Value("int32")})
-        self.label_classes = [0, 1]
-
-
-class SuperGlue(datasets.GeneratorBasedBuilder):
-    """The SuperGLUE benchmark."""
-
-    BUILDER_CONFIGS = [
-        SuperGlueConfig(
-            name="fake_news",
-            description=_BOOLQ_DESCRIPTION,
-        ),
-        SuperGlueConfig(
-            name="job_scams",
-            description=_CB_DESCRIPTION,
-        ),
-        SuperGlueConfig(
-            name="phishing",
-            description=_COPA_DESCRIPTION,
-        ),
-        SuperGlueConfig(
-            name="political_statements",
-            description=_MULTIRC_DESCRIPTION,
-        ),
-        SuperGlueConfig(
-            name="product_reviews",
-            description=_RECORD_DESCRIPTION,
-        ),
-        SuperGlueConfig(
-            name="sms",
-            description=_RTE_DESCRIPTION,
-        ),
-        SuperGlueConfig(
-            name="twitter_rumours",
-            description=_WIC_DESCRIPTION,
-        ),
-    ]
-
-    def _info(self):
-        features = {feature: datasets.Value("string") for feature in self.config.features}
-
-        features["label"] = datasets.features.ClassLabel(names=self.config.label_classes)
-
-        return datasets.DatasetInfo(
-            description=_GLUE_DESCRIPTION + self.config.description,
-            features=datasets.Features(features),
-        )
-
-    def _split_generators(self, dl_manager):
-        return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                gen_kwargs={
-                    "data_file": os.path.join(self.config.name, "train.jsonl"),
-                    "split": datasets.Split.TRAIN,
-                },
-            ),
-            datasets.SplitGenerator(
-                name=datasets.Split.VALIDATION,
-                gen_kwargs={
-                    "data_file": os.path.join(self.config.name, "validation.jsonl"),
-                    "split": datasets.Split.VALIDATION,
-                },
-            ),
-            datasets.SplitGenerator(
-                name=datasets.Split.TEST,
-                gen_kwargs={
-                    "data_file": os.path.join(self.config.name, "test.jsonl"),
-                    "split": datasets.Split.TEST,
-                },
-            ),
-        ]
-
-    def _generate_examples(self, data_file, split):
-        with open(data_file, encoding="utf-8") as f:
-            for line in f:
-                row = json.loads(line)
-
-                if self.config.name == "multirc":
-                    paragraph = row["passage"]
-                    for question in paragraph["questions"]:
-                        for answer in question["answers"]:
-                            label = answer.get("label")
-                            key = "%s_%s_%s" % (row["idx"], question["idx"], answer["idx"])
-                            yield key, {
-                                "paragraph": paragraph["text"],
-                                "question": question["question"],
-                                "answer": answer["text"],
-                                "label": -1 if label is None else _cast_label(bool(label)),
-                                "idx": {"paragraph": row["idx"], "question": question["idx"], "answer": answer["idx"]},
-                            }
-                elif self.config.name == "record":
-                    passage = row["passage"]
-                    entity_texts, entity_spans = _get_record_entities(passage)
-                    for qa in row["qas"]:
-                        yield qa["idx"], {
-                            "passage": passage["text"],
-                            "query": qa["query"],
-                            "entities": entity_texts,
-                            "entity_spans": entity_spans,
-                            "answers": _get_record_answers(qa),
-                            "idx": {"passage": row["idx"], "query": qa["idx"]},
-                        }
-                else:
-                    if self.config.name.startswith("wsc"):
-                        row.update(row["target"])
-                    example = {feature: row[feature] for feature in self.config.features}
-                    if self.config.name == "wsc.fixed":
-                        example = _fix_wst(example)
-                    example["idx"] = row["idx"]
-
-                    if "label" in row:
-                        if self.config.name == "copa":
-                            example["label"] = "choice2" if row["label"] else "choice1"
-                        else:
-                            example["label"] = _cast_label(row["label"])
-                    else:
-                        assert split == datasets.Split.TEST, row
-                        example["label"] = -1
-                    yield example["idx"], example
-
-
-def _fix_wst(ex):
-    """Fixes most cases where spans are not actually substrings of text."""
-
-    def _fix_span_text(k):
-        """Fixes a single span."""
-        text = ex[k + "_text"]
-        index = ex[k + "_index"]
-
-        if text in ex["text"]:
-            return
-
-        if text in ("Kamenev and Zinoviev", "Kamenev, Zinoviev, and Stalin"):
-            # There is no way to correct these examples since the subjects have
-            # intervening text.
-            return
-
-        if "theyscold" in text:
-            ex["text"] = ex["text"].replace("theyscold", "they scold")
-            ex["span2_index"] = 10
-        # Make sure case of the first words match.
-        first_word = ex["text"].split()[index]
-        if first_word[0].islower():
-            text = text[0].lower() + text[1:]
-        else:
-            text = text[0].upper() + text[1:]
-        # Remove punctuation in span.
-        text = text.rstrip(".")
-        # Replace incorrect whitespace character in span.
-        text = text.replace("\n", " ")
-        ex[k + "_text"] = text
-        assert ex[k + "_text"] in ex["text"], ex
-
-    _fix_span_text("span1")
-    _fix_span_text("span2")
-    return ex
-
-
-def _cast_label(label):
-    """Converts the label into the appropriate string version."""
-    if isinstance(label, str):
-        return label
-    elif isinstance(label, bool):
-        return "True" if label else "False"
-    elif isinstance(label, int):
-        assert label in (0, 1)
-        return str(label)
-    else:
-        raise ValueError("Invalid label format.")
-
-
-def _get_record_entities(passage):
-    """Returns the unique set of entities."""
-    text = passage["text"]
-    entity_spans = list()
-    for entity in passage["entities"]:
-        entity_text = text[entity["start"] : entity["end"] + 1]
-        entity_spans.append({"text": entity_text, "start": entity["start"], "end": entity["end"] + 1})
-    entity_spans = sorted(entity_spans, key=lambda e: e["start"])  # sort by start index
-    entity_texts = set(e["text"] for e in entity_spans)  # for backward compatibility
-    return entity_texts, entity_spans
-
-
-def _get_record_answers(qa):
-    """Returns the unique set of answers."""
-    if "answers" not in qa:
-        return []
-    answers = set()
-    for answer in qa["answers"]:
-        answers.add(answer["text"])
-    return sorted(answers)
-
-
-# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# TODO: Address all TODOs and remove all explanatory comments
-"""TODO: Add a description here."""
-
 
 import csv
 import json
 import os
-
+import sys
 import datasets
 
 
@@ -510,7 +26,7 @@ _LICENSE = ""
 
 
 # TODO: Name of the dataset usually matches the script name with CamelCase instead of snake_case
-class NewDataset(datasets.GeneratorBasedBuilder):
+class GDDS(datasets.GeneratorBasedBuilder):
     """TODO: Short description of my dataset."""
 
     VERSION = datasets.Version("2.1.0")
@@ -568,12 +84,13 @@ class NewDataset(datasets.GeneratorBasedBuilder):
         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLS
         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
+        script_directory = os.path.dirname(os.path.abspath(sys.argv[0]))
         return [
             datasets.SplitGenerator(
                 name=datasets.Split.TRAIN,
                 # These kwargs will be passed to _generate_examples
                 gen_kwargs={
-                    "filepath": os.path.join(self.config.name, "train.jsonl"),
+                    "filepath": os.path.join(script_directory, self.config.name, "train.jsonl"),
                     "split": "train",
                 },
             ),
@@ -581,15 +98,15 @@ class NewDataset(datasets.GeneratorBasedBuilder):
                 name=datasets.Split.VALIDATION,
                 # These kwargs will be passed to _generate_examples
                 gen_kwargs={
-                    "filepath": os.path.join(self.config.name, "validate.jsonl"),
+                    "filepath": os.path.join(script_directory, self.config.name, "validate.jsonl"),
                     "split": "dev",
                 },
             ),
             datasets.SplitGenerator(
                 name=datasets.Split.TEST,
-
+                # These kwargs will be passed to _generate_examples
                 gen_kwargs={
-                    "filepath": os.path.join(self.config.name, "test.jsonl"),
+                    "filepath": os.path.join(script_directory, self.config.name, "test.jsonl"),
                     "split": "test"
                 },
             ),
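
Why the change: the old gen_kwargs joined paths as os.path.join(self.config.name, "train.jsonl"), which resolves against the process's current working directory; the commit instead anchors the paths on the directory of the running script. A minimal sketch of the resulting lookup, assuming the per-config layout <config>/{train,validate,test}.jsonl shown in the diff and borrowing the "phishing" config name from the old BUILDER_CONFIGS:

    import os
    import sys

    # Resolve data files against the entry script's directory instead of the
    # current working directory, mirroring the commit's approach.
    script_directory = os.path.dirname(os.path.abspath(sys.argv[0]))

    config_name = "phishing"  # illustrative; any config directory works the same way
    filepaths = {
        split: os.path.join(script_directory, config_name, f"{split}.jsonl")
        for split in ("train", "validate", "test")
    }
    print(filepaths)

One caveat with this design: sys.argv[0] names whatever script launched the Python process, not necessarily gdds.py itself, so the config folders must sit next to the caller's entry point; anchoring on os.path.dirname(os.path.abspath(__file__)) instead would tie the lookup to the module file.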
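
For reference, a hypothetical end-to-end call, assuming this script is published as the gdds dataset under the dainis-boumber namespace on the Hub and that the installed datasets version still executes dataset scripts:

    from datasets import load_dataset

    # "phishing" is one of the config names defined in BUILDER_CONFIGS; recent
    # datasets releases require trust_remote_code=True to run dataset scripts.
    ds = load_dataset("dainis-boumber/gdds", "phishing", trust_remote_code=True)
    print(ds["train"][0])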