---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question_id
    dtype: int64
  - name: question
    dtype: string
  - name: answers
    sequence: string
  - name: data_split
    dtype: string
  - name: ocr_results
    struct:
    - name: page
      dtype: int64
    - name: clockwise_orientation
      dtype: float64
    - name: width
      dtype: int64
    - name: height
      dtype: int64
    - name: unit
      dtype: string
    - name: lines
      list:
      - name: bounding_box
        sequence: int64
      - name: text
        dtype: string
      - name: words
        list:
        - name: bounding_box
          sequence: int64
        - name: text
          dtype: string
        - name: confidence
          dtype: string
  - name: other_metadata
    struct:
    - name: ucsf_document_id
      dtype: string
    - name: ucsf_document_page_no
      dtype: string
    - name: doc_id
      dtype: int64
    - name: image
      dtype: string
  splits:
  - name: train
    num_examples: 39463
  - name: validation
    num_examples: 5349
  - name: test
    num_examples: 5188
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: mit
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
---

Each example exposes the following keys:

```
dict_keys(['image', 'question_id', 'question', 'answers', 'data_split', 'ocr_results', 'other_metadata'])
```

`image` is a byte string containing the image contents. `answers` is a list of possible answers, aligned with the expected inputs to the [ANLS metric](https://arxiv.org/abs/1905.13648).

The loader can then be iterated over normally and yields one question per example. Many questions rely on the same image, so there is some amount of data duplication.

### Data Splits

#### Train

* 10,194 images, 39,463 questions and answers.

#### Validation

* 1,286 images, 5,349 questions and answers.

#### Test

* 1,287 images, 5,188 questions.

## Additional Information

### Dataset Curators

Pablo Montalvo, Ross Wightman

### Licensing Information

MIT

### Citation Information

Mathew, Minesh, Dimosthenis Karatzas, and C. V. Jawahar. "Docvqa: A dataset for vqa on document images."
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021.
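The nested `ocr_results` schema above decodes to plain Python dicts and lists. As a minimal sketch, the toy record below is entirely hypothetical — it only mirrors the field names from the schema (the box layout and `unit` value are assumptions; check real records):

```python
# Hypothetical record mirroring the `ocr_results` schema; the bounding-box
# layout and `unit` value here are assumptions, not guaranteed by the schema.
ocr_results = {
    "page": 1,
    "clockwise_orientation": 0.0,
    "width": 1700,
    "height": 2200,
    "unit": "pixel",
    "lines": [
        {
            "bounding_box": [10, 10, 300, 10, 300, 40, 10, 40],
            "text": "CONFIDENTIAL MEMO",
            "words": [
                {"bounding_box": [10, 10, 150, 10, 150, 40, 10, 40],
                 "text": "CONFIDENTIAL", "confidence": "High"},
                {"bounding_box": [160, 10, 300, 10, 300, 40, 160, 40],
                 "text": "MEMO", "confidence": "High"},
            ],
        },
    ],
}

# Flatten all recognized words together with their boxes.
words = [(w["text"], w["bounding_box"])
         for line in ocr_results["lines"]
         for w in line["words"]]
```

This two-level `lines` → `words` traversal is all that is needed to feed OCR tokens and boxes into a layout-aware model.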
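Since `answers` is shaped for ANLS evaluation, here is a minimal sketch of scoring a prediction against such a list. This is not the official evaluation code; it assumes the standard threshold τ = 0.5 and case-insensitive matching from the referenced paper:

```python
# Sketch of ANLS (Average Normalized Levenshtein Similarity) scoring for a
# single question; assumes the standard threshold tau = 0.5.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def anls_score(prediction: str, answers: list[str], tau: float = 0.5) -> float:
    """Best normalized similarity of `prediction` against any reference answer."""
    best = 0.0
    for ans in answers:
        p, a = prediction.strip().lower(), ans.strip().lower()
        if not p and not a:
            return 1.0
        nl = levenshtein(p, a) / max(len(p), len(a))
        if nl < tau:  # distances at or above the threshold score 0
            best = max(best, 1.0 - nl)
    return best
```

The per-question scores are then averaged over the dataset; taking the maximum over the `answers` list is why multiple acceptable answers are stored per question.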