---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question_id
    dtype: int64
  - name: question
    dtype: string
  - name: answers
    sequence: string
  - name: data_split
    dtype: string
  - name: ocr_results
    struct:
    - name: page
      dtype: int64
    - name: clockwise_orientation
      dtype: float64
    - name: width
      dtype: int64
    - name: height
      dtype: int64
    - name: unit
      dtype: string
    - name: lines
      list:
      - name: bounding_box
        sequence: int64
      - name: text
        dtype: string
      - name: words
        list:
        - name: bounding_box
          sequence: int64
        - name: text
          dtype: string
        - name: confidence
          dtype: string
  - name: other_metadata
    struct:
    - name: ucsf_document_id
      dtype: string
    - name: ucsf_document_page_no
      dtype: string
    - name: doc_id
      dtype: int64
    - name: image
      dtype: string
  splits:
  - name: train
    num_examples: 39463
  - name: validation
    num_examples: 5349
  - name: test
    num_examples: 5188
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: mit
task_categories:
- question-answering
language:
- en
pretty_name: DocVQA
size_categories:
- 10K<n<100K
---

# Dataset Card for DocVQA Dataset

## Dataset Description

- **Point of Contact (original curators):** [Minesh Mathew](mailto:[email protected]), [Dimosthenis Karatzas](mailto:[email protected]), [C. V. Jawahar](mailto:[email protected])
- **Point of Contact at Hugging Face:** [Pablo Montalvo](mailto:[email protected])

### Dataset Summary

DocVQA is a dataset for visual question answering (VQA) on document images, introduced in Mathew et al. (2021); it consists of 50,000 questions defined over 12,000+ document images.
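
Each record pairs a document image with a question, a list of reference answers, and page-level OCR results (lines and words with bounding boxes), as listed in the `dataset_info` schema at the top of this card. Below is a minimal sketch for peeking at one example, assuming streaming access to the Parquet files:

```python
from datasets import load_dataset

# Stream the validation split to look at a single example without a full download.
docvqa = load_dataset("pixparse/docvqa-single-page", split="validation", streaming=True)
sample = next(iter(docvqa))

print(sample["question"])   # question string
print(sample["answers"])    # list of reference answer strings
first_line = sample["ocr_results"]["lines"][0]
print(first_line["text"], first_line["bounding_box"])  # OCR line text and its box
```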

### Usage

This dataset can be used with current releases of the Hugging Face `datasets` library.
Here is an example using a custom collator to bundle batches from the `train` split in a trainable way. The `Collator` below is a minimal sketch that gathers the raw fields; adapt its `collate_fn` to your model's processor.

```python
from datasets import load_dataset
from torch.utils.data import DataLoader


class Collator:
    """Minimal example collator: gathers raw fields into per-batch lists."""

    def collate_fn(self, batch):
        return {
            "images": [sample["image"] for sample in batch],
            "questions": [sample["question"] for sample in batch],
            "answers": [sample["answers"] for sample in batch],
        }


docvqa_dataset = load_dataset("pixparse/docvqa-single-page", split="train")

collator = Collator()
loader = DataLoader(docvqa_dataset, batch_size=8, collate_fn=collator.collate_fn)
```

The loader can then be iterated over as usual and yields batches of image, question, and answer samples.
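For example, inspecting the first batch (the keys match the collator sketch above):

```python
for batch in loader:
    # Each batch is a dict of lists produced by the collator above.
    print(len(batch["images"]), batch["questions"][0], batch["answers"][0])
    break
```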

### Data Splits


#### Train
* 10,194 images, 39,463 questions and answers.

#### Validation

* 1,286 images, 5,349 questions and answers.

#### Test

* 1,287 images, 5,188 questions.
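
The split sizes above can be checked directly against the loaded `DatasetDict`; a small sketch:

```python
from datasets import load_dataset

docvqa = load_dataset("pixparse/docvqa-single-page")  # DatasetDict with train/validation/test
for split_name, split in docvqa.items():
    print(split_name, split.num_rows)
# Expected: train 39463, validation 5349, test 5188
```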
  
## Additional Information

### Dataset Curators

Pablo Montalvo, Ross Wightman

### Licensing Information

MIT

### Citation Information

Mathew, Minesh, Dimosthenis Karatzas, and C. V. Jawahar. "DocVQA: A Dataset for VQA on Document Images." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021.