---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: RACE
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: race
dataset_info:
- config_name: high
  features:
  - name: example_id
    dtype: string
  - name: article
    dtype: string
  - name: answer
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  splits:
  - name: test
    num_bytes: 6989121
    num_examples: 3498
  - name: train
    num_bytes: 126243396
    num_examples: 62445
  - name: validation
    num_bytes: 6885287
    num_examples: 3451
  download_size: 25443609
  dataset_size: 140117804
- config_name: middle
  features:
  - name: example_id
    dtype: string
  - name: article
    dtype: string
  - name: answer
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  splits:
  - name: test
    num_bytes: 1786297
    num_examples: 1436
  - name: train
    num_bytes: 31065322
    num_examples: 25421
  - name: validation
    num_bytes: 1761937
    num_examples: 1436
  download_size: 25443609
  dataset_size: 34613556
- config_name: all
  features:
  - name: example_id
    dtype: string
  - name: article
    dtype: string
  - name: answer
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  splits:
  - name: test
    num_bytes: 8775394
    num_examples: 4934
  - name: train
    num_bytes: 157308694
    num_examples: 87866
  - name: validation
    num_bytes: 8647200
    num_examples: 4887
  download_size: 25443609
  dataset_size: 174731288
---

# "race" Grouped by Article

This is a modified version of the [race](https://huggingface.co/datasets/race) dataset that returns documents grouped by article context rather than by individual question. The original README is reproduced below.

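The regrouping this modified version performs can be sketched roughly as follows. This is a toy illustration: the record values and the `group_by_article` helper are invented for the example, not the repository's actual preprocessing code.

```python
from collections import defaultdict

# Toy question-level rows in the shape of the original "race" examples
# (one row per question; the article text repeats for every question on it).
rows = [
    {"example_id": "high132.txt", "article": "Schoolgirls have been wearing ...",
     "question": "Q1", "options": ["a", "b", "c", "d"], "answer": "A"},
    {"example_id": "high132.txt", "article": "Schoolgirls have been wearing ...",
     "question": "Q2", "options": ["e", "f", "g", "h"], "answer": "C"},
    {"example_id": "middle3.txt", "article": "There is not enough oil ...",
     "question": "Q3", "options": ["i", "j", "k", "l"], "answer": "B"},
]

def group_by_article(rows):
    """Collapse question-level rows into one document per article."""
    grouped = defaultdict(lambda: {"questions": [], "options": [], "answers": []})
    for row in rows:
        doc = grouped[row["example_id"]]
        doc["article"] = row["article"]
        doc["questions"].append(row["question"])
        doc["options"].append(row["options"])
        doc["answers"].append(row["answer"])
    return dict(grouped)

docs = group_by_article(rows)
print(len(docs))  # 2 documents instead of 3 question rows
```
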
# Dataset Card for "race"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [http://www.cs.cmu.edu/~glai1/data/race/](http://www.cs.cmu.edu/~glai1/data/race/)
- **Repository:** https://github.com/qizhex/RACE_AR_baselines
- **Paper:** [RACE: Large-scale ReAding Comprehension Dataset From Examinations](https://arxiv.org/abs/1704.04683)
- **Point of Contact:** [Guokun Lai](mailto:[email protected]), [Qizhe Xie](mailto:[email protected])
- **Size of downloaded dataset files:** 76.33 MB
- **Size of the generated dataset:** 349.46 MB
- **Total amount of disk used:** 425.80 MB

### Dataset Summary

RACE is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The dataset is collected from English examinations in China designed for middle school and high school students, and can serve as training and test sets for machine comprehension.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### all

- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 174.73 MB
- **Total amount of disk used:** 200.17 MB

An example of 'train' looks as follows (the example was too long and was cropped):
```
{
    "answer": "A",
    "article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
    "example_id": "high132.txt",
    "options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
    "question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
}
```

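The `answer` field is a letter indexing into `options`. A minimal sketch of resolving it, using the instance shown above (the `resolve_answer` helper is illustrative, not part of the dataset loader):

```python
# A single RACE-style instance (fields as in the example above).
instance = {
    "answer": "A",
    "question": "The girls at Paget High School are not allowed to wear skirts in that _ .",
    "options": [
        "short skirts give people the impression of sexualisation",
        "short skirts are too expensive for parents to afford",
        "the headmaster doesn't like girls wearing short skirts",
        "the girls wearing short skirts will be at the risk of being laughed at",
    ],
}

def resolve_answer(instance):
    """Map the answer letter ("A".."D") to the corresponding option text."""
    index = ord(instance["answer"]) - ord("A")
    return instance["options"][index]

print(resolve_answer(instance))
# → "short skirts give people the impression of sexualisation"
```
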
#### high

- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 140.12 MB
- **Total amount of disk used:** 165.56 MB

An example of 'train' looks as follows (the example was too long and was cropped):
```
{
    "answer": "A",
    "article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
    "example_id": "high132.txt",
    "options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
    "question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
}
```

#### middle

- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 34.61 MB
- **Total amount of disk used:** 60.05 MB

An example of 'train' looks as follows (the example was too long and was cropped):
```
{
    "answer": "B",
    "article": "\"There is not enough oil in the world now. As time goes by, it becomes less and less, so what are we going to do when it runs ou...",
    "example_id": "middle3.txt",
    "options": ["There is more petroleum than we can use now.", "Trees are needed for some other things besides making gas.", "We got electricity from ocean tides in the old days.", "Gas wasn't used to run cars in the Second World War."],
    "question": "According to the passage, which of the following statements is TRUE?"
}
```

### Data Fields

The data fields are the same across all splits.

#### all
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.

#### high
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.

#### middle
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.

### Data Splits

| name   | train | validation | test |
|--------|------:|-----------:|-----:|
| all    | 87866 |       4887 | 4934 |
| high   | 62445 |       3451 | 3498 |
| middle | 25421 |       1436 | 1436 |

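The `all` configuration is the union of `high` and `middle`, which the counts above can be sanity-checked against directly:

```python
# Split sizes copied from the Data Splits table above.
splits = {
    "high":   {"train": 62445, "validation": 3451, "test": 3498},
    "middle": {"train": 25421, "validation": 1436, "test": 1436},
    "all":    {"train": 87866, "validation": 4887, "test": 4934},
}

# Every "all" split should equal the sum of the corresponding
# "high" and "middle" splits.
for split in ("train", "validation", "test"):
    total = splits["high"][split] + splits["middle"][split]
    assert total == splits["all"][split], split
print("all == high + middle for every split")
```
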
## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

From http://www.cs.cmu.edu/~glai1/data/race/:

1. RACE dataset is available for non-commercial research purpose only.
2. All passages are obtained from the Internet and are not the property of Carnegie Mellon University. We are not responsible for the content nor the meaning of these passages.
3. You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose any portion of the contexts or any portion of the derived data.
4. We reserve the right to terminate your access to the RACE dataset at any time.

### Citation Information

```bibtex
@inproceedings{lai-etal-2017-race,
    title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
    author = "Lai, Guokun and
      Xie, Qizhe and
      Liu, Hanxiao and
      Yang, Yiming and
      Hovy, Eduard",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1082",
    doi = "10.18653/v1/D17-1082",
    pages = "785--794",
}
```

### Contributions

Thanks to [@abarbosa94](https://github.com/abarbosa94), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.