derek-thomas committed on
Commit 1361526 · 1 Parent(s): 29afe55

Updating README.md

Files changed (1): README.md (+268 −7)

README.md CHANGED
@@ -1,4 +1,54 @@
 ---
 dataset_info:
   features:
   - name: image
@@ -29,17 +79,228 @@
       dtype: string
   splits:
   - name: train
-    num_bytes: 431579159.804
     num_examples: 12726
   - name: validation
-    num_bytes: 144039504.812
     num_examples: 4241
   - name: test
-    num_bytes: 152149151.301
     num_examples: 4241
-  download_size: 626082086
-  dataset_size: 727767815.917
 ---
-# Dataset Card for "ScienceQA"
-
-[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
---
license: cc-by-nc-sa-4.0
annotations_creators:
- expert-generated
- found
language:
- en
language_creators:
- expert-generated
- found
multilinguality:
- monolingual
paperswithcode_id: science-question-answering
pretty_name: ScienceQA
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- multi-modal-qa
- science
- chemistry
- biology
- physics
- earth-science
- engineering
- geography
- history
- world-history
- civics
- economics
- global-studies
- grammar
- writing
- vocabulary
- natural-science
- language-science
- social-science
task_categories:
- multiple-choice
- image-classification
- question-answering
- other
- visual-question-answering
- text-classification
task_ids:
- multiple-choice-qa
- multi-class-image-classification
- closed-domain-qa
- visual-question-answering
- multi-class-classification
dataset_info:
  features:
  - name: image
  # … (remaining feature definitions elided in the diff) …
    dtype: string
  splits:
  - name: train
    num_bytes: 16416902
    num_examples: 12726
  - name: validation
    num_bytes: 5404896
    num_examples: 4241
  - name: test
    num_bytes: 5441676
    num_examples: 4241
  download_size: 0
  dataset_size: 27263474
---
 
# Dataset Card for ScienceQA

## Table of Contents
- [Dataset Card for ScienceQA](#dataset-card-for-scienceqa)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)
## Dataset Description

- **Homepage:** [https://scienceqa.github.io](https://scienceqa.github.io)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering (NeurIPS 2022)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary

ScienceQA is a multimodal multiple-choice science question answering dataset collected from elementary and high school curricula, introduced in the paper "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering" (NeurIPS 2022). Each question comes with answer options and, where available, a contextual image, a relevant lecture, and a worked solution, making the dataset suitable for studying chain-of-thought reasoning. It contains 21,208 examples (12,726 train / 4,241 validation / 4,241 test) spanning natural science, social science, and language science.

### Supported Tasks and Leaderboards

Multimodal multiple-choice question answering.

### Languages

English
## Dataset Structure

### Data Instances

Explore more samples [here](https://scienceqa.github.io/explore.html).

```python
{'image': Image,
 'question': 'Which of these states is farthest north?',
 'choices': ['West Virginia', 'Louisiana', 'Arizona', 'Oklahoma'],
 'answer': 0,
 'hint': '',
 'task': 'closed choice',
 'grade': 'grade2',
 'subject': 'social science',
 'topic': 'geography',
 'category': 'Geography',
 'skill': 'Read a map: cardinal directions',
 'lecture': 'Maps have four cardinal directions, or main directions. Those directions are north, south, east, and west.\nA compass rose is a set of arrows that point to the cardinal directions. A compass rose usually shows only the first letter of each cardinal direction.\nThe north arrow points to the North Pole. On most maps, north is at the top of the map.',
 'solution': 'To find the answer, look at the compass rose. Look at which way the north arrow is pointing. West Virginia is farthest north.'}
```

Some records may be missing any or all of `image`, `lecture`, and `solution`.
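A record like the one above maps directly onto a multiple-choice prompt. A minimal sketch (the record is the sample shown above with the image omitted; the prompt format and helper names are illustrative, not part of the dataset):

```python
# Build a multiple-choice prompt from a ScienceQA-style record.
# The prompt layout itself is a choice of this sketch, not prescribed
# by the dataset.

LETTERS = "ABCDEFGH"

def format_prompt(record: dict) -> str:
    lines = [record["question"]]
    for letter, choice in zip(LETTERS, record["choices"]):
        lines.append(f"({letter}) {choice}")
    if record.get("hint"):  # `hint` may be an empty string
        lines.append(f"Hint: {record['hint']}")
    return "\n".join(lines)

def answer_letter(record: dict) -> str:
    # `answer` is an index into `choices`
    return LETTERS[record["answer"]]

sample = {
    "question": "Which of these states is farthest north?",
    "choices": ["West Virginia", "Louisiana", "Arizona", "Oklahoma"],
    "answer": 0,
    "hint": "",
}

print(format_prompt(sample))
print("Correct:", answer_letter(sample))  # Correct: A
```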
### Data Fields

- `image` : Contextual image (may be absent)
- `question` : The question text
- `choices` : The candidate answers, exactly one of which is correct
- `answer` : Index into `choices` of the correct answer
- `hint` : Optional hint to help answer the `question`
- `task` : Task description (e.g. `closed choice`)
- `grade` : Grade level, from K through 12
- `subject` : High-level subject area: natural science, social science, or language science
- `topic` : Topic within the `subject` (e.g. `geography`)
- `category` : A subcategory of the `topic`
- `skill` : A description of the skill the question exercises
- `lecture` : A relevant lecture from which the `question` is generated (may be absent)
- `solution` : A worked explanation of how to solve the `question` (may be absent)
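These invariants (the `answer` index in range, at least two `choices`, the optional fields possibly empty) can be sanity-checked with a small helper. A sketch assuming records are plain Python dicts with the fields listed above:

```python
def check_record(record: dict) -> list:
    """Return a list of invariant violations for one record (empty list = OK)."""
    problems = []
    if len(record["choices"]) < 2:
        problems.append("fewer than two choices")
    if not 0 <= record["answer"] < len(record["choices"]):
        problems.append("answer index out of range")
    # `image`, `lecture`, and `solution` may legitimately be missing or empty,
    # so their absence is not flagged.
    return problems

good = {"question": "q?", "choices": ["a", "b"], "answer": 0}
bad = {"question": "q?", "choices": ["a", "b"], "answer": 5}
print(check_record(good))  # []
print(check_record(bad))   # ['answer index out of range']
```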
194
+ - name: train
195
+ - num_bytes: 16416902
196
+ - num_examples: 12726
197
+ - name: validation
198
+ - num_bytes: 5404896
199
+ - num_examples: 4241
200
+ - name: test
201
+ - num_bytes: 5441676
202
+ - num_examples: 4241
203
+
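The split sizes above imply a roughly 60/20/20 partition; a quick check:

```python
# Split sizes taken from the table above.
splits = {"train": 12726, "validation": 4241, "test": 4241}
total = sum(splits.values())
print(total)  # 21208
for name, count in splits.items():
    print(f"{name}: {count} ({count / total:.1%})")
# train: 12726 (60.0%)
# validation: 4241 (20.0%)
# test: 4241 (20.0%)
```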
## Dataset Creation

### Curation Rationale

When answering a question, humans use the information available across different modalities to synthesize a consistent and complete chain of thought (CoT). This process is normally a black box for deep learning models such as large-scale language models. Science question benchmarks have recently been used to diagnose the multi-hop reasoning ability and interpretability of AI systems, but existing datasets either fail to provide annotations for the answers or are restricted to a text-only modality, small scale, and limited domain diversity. To this end, we present Science Question Answering (ScienceQA).
### Source Data

ScienceQA is collected from elementary and high school science curricula.

#### Initial Data Collection and Normalization

See the [Annotations](#annotations) section below.

#### Who are the source language producers?

See the [Annotations](#annotations) section below.
### Annotations

Questions in the ScienceQA dataset are sourced from open resources managed by IXL Learning, an online learning platform curated by experts in the field of K-12 education. The dataset includes problems that align with California Common Core Content Standards. To construct ScienceQA, we downloaded the original science problems and then extracted individual components (e.g. questions, hints, images, options, answers, lectures, and solutions) from them based on heuristic rules.

We manually removed invalid questions, such as questions that have only one choice, questions that contain faulty data, and questions that are duplicated, to comply with fair use and transformative use of the law. If multiple correct answers applied, we kept only one correct answer. We also shuffled the answer options of each question so that the choices do not follow any specific pattern. To make the dataset easy to use, we then used semi-automated scripts to reformat the lectures and solutions, so special structures in the texts, such as tables and lists, are easily distinguishable from simple text passages.

Similar to the ImageNet, ReClor, and PMR datasets, ScienceQA is available for non-commercial research purposes only and the copyright belongs to the original authors. To ensure data quality, we developed a data exploration tool to review examples in the collected dataset, and incorrect annotations were further manually revised by experts. The tool can be accessed at [https://scienceqa.github.io/explore.html](https://scienceqa.github.io/explore.html).
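The option shuffling described above, which keeps the stored `answer` index pointing at the correct choice, can be sketched as follows (the seed and helper name are illustrative; this is not the authors' actual script):

```python
import random

def shuffle_choices(choices, answer_idx, seed=0):
    """Shuffle answer options while keeping track of the correct index."""
    rng = random.Random(seed)  # fixed seed so the shuffle is reproducible
    order = list(range(len(choices)))
    rng.shuffle(order)
    shuffled = [choices[i] for i in order]
    # The correct choice moved to wherever its old index landed.
    return shuffled, order.index(answer_idx)

choices = ["West Virginia", "Louisiana", "Arizona", "Oklahoma"]
shuffled, new_answer = shuffle_choices(choices, 0, seed=42)
# The correct answer text survives the shuffle:
assert shuffled[new_answer] == "West Virginia"
```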
241
+ #### Annotation process
242
+
243
+ See above
244
+
245
+ #### Who are the annotators?
246
+
247
+ See above
248
+
### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information

### Dataset Curators

- Pan Lu (1, 3)
- Swaroop Mishra (2, 3)
- Tony Xia (1)
- Liang Qiu (1)
- Kai-Wei Chang (1)
- Song-Chun Zhu (1)
- Oyvind Tafjord (3)
- Peter Clark (3)
- Ashwin Kalyan (3)

From:
1. University of California, Los Angeles
2. Arizona State University
3. Allen Institute for AI
### Licensing Information

[Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
### Citation Information

```bibtex
@inproceedings{lu2022learn,
  title={Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering},
  author={Lu, Pan and Mishra, Swaroop and Xia, Tony and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Kalyan, Ashwin},
  booktitle={The 36th Conference on Neural Information Processing Systems (NeurIPS)},
  year={2022}
}
```

### Contributions

Thanks to [Derek Thomas](https://huggingface.co/derek-thomas) [@datavistics](https://github.com/datavistics) for adding this dataset.