---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: openbookqa
pretty_name: OpenBookQA
dataset_info:
- config_name: additional
  features:
  - name: id
    dtype: string
  - name: question_stem
    dtype: string
  - name: choices
    sequence:
    - name: text
      dtype: string
    - name: label
      dtype: string
  - name: answerKey
    dtype: string
  - name: fact1
    dtype: string
  - name: humanScore
    dtype: float32
  - name: clarity
    dtype: float32
  - name: turkIdAnonymized
    dtype: string
  splits:
  - name: train
    num_bytes: 1288577
    num_examples: 4957
  - name: validation
    num_bytes: 135916
    num_examples: 500
  - name: test
    num_bytes: 130701
    num_examples: 500
  download_size: 783789
  dataset_size: 1555194
- config_name: main
  features:
  - name: id
    dtype: string
  - name: question_stem
    dtype: string
  - name: choices
    sequence:
    - name: text
      dtype: string
    - name: label
      dtype: string
  - name: answerKey
    dtype: string
  splits:
  - name: train
    num_bytes: 895386
    num_examples: 4957
  - name: validation
    num_bytes: 95428
    num_examples: 500
  - name: test
    num_bytes: 91759
    num_examples: 500
  download_size: 609613
  dataset_size: 1082573
configs:
- config_name: additional
  data_files:
  - split: train
    path: additional/train-*
  - split: validation
    path: additional/validation-*
  - split: test
    path: additional/test-*
- config_name: main
  data_files:
  - split: train
    path: main/train-*
  - split: validation
    path: main/validation-*
  - split: test
    path: main/test-*
  default: true
---

# Dataset Card for OpenBookQA

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://allenai.org/data/open-book-qa](https://allenai.org/data/open-book-qa)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.89 MB
- **Size of the generated dataset:** 2.88 MB
- **Total amount of disk used:** 5.78 MB

### Dataset Summary

OpenBookQA is a question-answering dataset modeled after open-book exams for assessing human understanding of a subject. It aims to promote research in advanced question answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, its questions require multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### main

- **Size of downloaded dataset files:** 1.45 MB
- **Size of the generated dataset:** 1.45 MB
- **Total amount of disk used:** 2.88 MB

An example of 'train' looks as follows:
```
{'id': '7-980',
 'question_stem': 'The sun is responsible for',
 'choices': {'text': ['puppies learning new tricks',
                      'children growing up and getting old',
                      'flowers wilting in a vase',
                      'plants sprouting, blooming and wilting'],
             'label': ['A', 'B', 'C', 'D']},
 'answerKey': 'D'}
```
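The `answerKey` field names an entry of `choices['label']` rather than indexing the texts directly; a minimal sketch of resolving it to the answer text, using the example record shown above:

```python
# Example record from the "main" config (copied from the card above).
example = {
    "id": "7-980",
    "question_stem": "The sun is responsible for",
    "choices": {
        "text": [
            "puppies learning new tricks",
            "children growing up and getting old",
            "flowers wilting in a vase",
            "plants sprouting, blooming and wilting",
        ],
        "label": ["A", "B", "C", "D"],
    },
    "answerKey": "D",
}

def answer_text(record):
    """Return the choice text that answerKey points to."""
    idx = record["choices"]["label"].index(record["answerKey"])
    return record["choices"]["text"][idx]

print(answer_text(example))  # plants sprouting, blooming and wilting
```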

#### additional

- **Size of downloaded dataset files:** 1.45 MB
- **Size of the generated dataset:** 1.45 MB
- **Total amount of disk used:** 2.88 MB

An example of 'train' looks as follows:
```
{'id': '7-980',
 'question_stem': 'The sun is responsible for',
 'choices': {'text': ['puppies learning new tricks',
                      'children growing up and getting old',
                      'flowers wilting in a vase',
                      'plants sprouting, blooming and wilting'],
             'label': ['A', 'B', 'C', 'D']},
 'answerKey': 'D',
 'fact1': 'the sun is the source of energy for physical cycles on Earth',
 'humanScore': 1.0,
 'clarity': 2.0,
 'turkIdAnonymized': 'b356d338b7'}
```
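The extra `humanScore` and `clarity` fields of the `additional` config can be used to filter questions by annotation quality. A sketch under stated assumptions: the records (beyond the one above) and the thresholds are illustrative, not taken from the dataset:

```python
# Records mimicking the "additional" schema; only the first is real
# (the example above), the other two are made up for illustration.
records = [
    {"id": "7-980", "humanScore": 1.0, "clarity": 2.0},
    {"id": "x-001", "humanScore": 0.4, "clarity": 1.2},
    {"id": "x-002", "humanScore": 0.9, "clarity": 1.8},
]

def keep(rec, min_human=0.8, min_clarity=1.5):
    # Keep questions that most humans answered correctly and that were
    # judged reasonably clear; both thresholds are illustrative choices.
    return rec["humanScore"] >= min_human and rec["clarity"] >= min_clarity

filtered = [r["id"] for r in records if keep(r)]
print(filtered)  # ['7-980', 'x-002']
```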

### Data Fields

The data fields are the same among all splits.

#### main
- `id`: a `string` feature.
- `question_stem`: a `string` feature.
- `choices`: a dictionary feature containing:
  - `text`: a `string` feature.
  - `label`: a `string` feature.
- `answerKey`: a `string` feature.

#### additional
- `id`: a `string` feature.
- `question_stem`: a `string` feature.
- `choices`: a dictionary feature containing:
  - `text`: a `string` feature.
  - `label`: a `string` feature.
- `answerKey`: a `string` feature.
- `fact1` (`str`): Originating common-knowledge core fact associated with the question.
- `humanScore` (`float`): Human accuracy score.
- `clarity` (`float`): Clarity score.
- `turkIdAnonymized` (`str`): Anonymized crowd-worker ID.

### Data Splits

| name       | train | validation | test |
|------------|------:|-----------:|-----:|
| main       |  4957 |        500 |  500 |
| additional |  4957 |        500 |  500 |
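The split sizes above can be sanity-checked against a loaded copy of the dataset; a minimal sketch with the counts hard-coded from the table:

```python
# Split sizes for either config, taken from the table above.
splits = {"train": 4957, "validation": 500, "test": 500}

total = sum(splits.values())
print(total)  # 5957

# Fraction of examples in the train split.
train_frac = splits["train"] / total
print(round(train_frac, 2))  # 0.83
```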
231
+
232
+ ## Dataset Creation
233
+
234
+ ### Curation Rationale
235
+
236
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
237
+
238
+ ### Source Data
239
+
240
+ #### Initial Data Collection and Normalization
241
+
242
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
243
+
244
+ #### Who are the source language producers?
245
+
246
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
247
+
248
+ ### Annotations
249
+
250
+ #### Annotation process
251
+
252
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
253
+
254
+ #### Who are the annotators?
255
+
256
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
257
+
258
+ ### Personal and Sensitive Information
259
+
260
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
261
+
262
+ ## Considerations for Using the Data
263
+
264
+ ### Social Impact of Dataset
265
+
266
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
267
+
268
+ ### Discussion of Biases
269
+
270
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
271
+
272
+ ### Other Known Limitations
273
+
274
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
275
+
276
+ ## Additional Information
277
+
278
+ ### Dataset Curators
279
+
280
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
281
+
282
+ ### Licensing Information
283
+
284
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
285
+
286
+ ### Citation Information
287
+
288
+ ```
289
+ @inproceedings{OpenBookQA2018,
290
+ title={Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},
291
+ author={Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},
292
+ booktitle={EMNLP},
293
+ year={2018}
294
+ }
295
+
296
+ ```
297
+
298
+
299
+ ### Contributions
300
+
301
+ Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.