---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question_id
    dtype: int64
  - name: question
    dtype: string
  - name: answers
    sequence: string
  - name: data_split
    dtype: string
  - name: ocr_results
    struct:
    - name: page
      dtype: int64
    - name: clockwise_orientation
      dtype: float64
    - name: width
      dtype: int64
    - name: height
      dtype: int64
    - name: unit
      dtype: string
    - name: lines
      list:
      - name: bounding_box
        sequence: int64
      - name: text
        dtype: string
      - name: words
        list:
        - name: bounding_box
          sequence: int64
        - name: text
          dtype: string
        - name: confidence
          dtype: string
  - name: other_metadata
    struct:
    - name: ucsf_document_id
      dtype: string
    - name: ucsf_document_page_no
      dtype: string
    - name: doc_id
      dtype: int64
    - name: image
      dtype: string
  splits:
  - name: train
    num_examples: 39463
  - name: validation
    num_examples: 5349
  - name: test
    num_examples: 5188
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: mit
task_categories:
- question-answering
language:
- en
pretty_name: DocVQA
size_categories:
- 10K<n<100K
---
# Dataset Card for DocVQA Dataset
## Dataset Description
- **Point of Contact (curators):** [Minesh Mathew](mailto:[email protected]), [Dimosthenis Karatzas](mailto:[email protected]), [C. V. Jawahar](mailto:[email protected])
- **Point of Contact (Hugging Face):** [Pablo Montalvo](mailto:[email protected])
### Dataset Summary
The DocVQA dataset, introduced in Mathew et al. (2021), is a document visual question answering dataset consisting of 50,000 questions defined on 12,000+ document images.
### Usage
This dataset can be used with current releases of the Hugging Face `datasets` library.
Here is an example that uses a custom collator to bundle the `train` split into trainable batches:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

docvqa_dataset = load_dataset("pixparse/docvqa-single-page", split="train")

# Collator is a user-defined class whose collate_fn bundles image, question,
# and answer fields into batches (see the sketch below).
collator = Collator()
loader = DataLoader(docvqa_dataset, batch_size=8, collate_fn=collator.collate_fn)
```
The loader can then be iterated over normally and yields batches of image, question, and answer samples.
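The card does not ship a `Collator` implementation. Below is a minimal sketch of what one could look like, assuming batches only need to group the raw images, question strings, and answer lists (the class and field names are illustrative, not part of the dataset):
```python
class Collator:
    """Illustrative collator: groups raw dataset fields into per-batch lists."""

    def collate_fn(self, batch):
        # Each element of `batch` is one dataset row with the features
        # declared in the header above: image, question, answers, ...
        return {
            "images": [row["image"] for row in batch],
            "questions": [row["question"] for row in batch],
            "answers": [row["answers"] for row in batch],
        }
```
A real training setup would typically also apply an image processor and tokenizer inside `collate_fn` to turn these lists into model-ready tensors.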
### Data Splits
#### Train
* 10,194 images, 39,463 questions and answers.
#### Validation
* 1,286 images, 5,349 questions and answers.
#### Test
* 1,287 images, 5,188 questions.
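To sanity-check these counts after loading, something like the following should work (reusing the `pixparse/docvqa-single-page` repository name from the usage example above):
```python
from datasets import load_dataset

# Load all three splits at once and print their sizes.
docvqa = load_dataset("pixparse/docvqa-single-page")
for split_name, split in docvqa.items():
    print(split_name, len(split))
# Expected: train 39463, validation 5349, test 5188

# Each row exposes the fields declared in the dataset header,
# including the nested OCR results and document metadata.
print(docvqa["train"][0].keys())
```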
## Additional Information
### Dataset Curators
Pablo Montalvo, Ross Wightman
### Licensing Information
MIT
### Citation Information
Mathew, Minesh, Dimosthenis Karatzas, and C. V. Jawahar. "DocVQA: A Dataset for VQA on Document Images." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021.