Molbap committed
Commit 2f4e780
1 Parent(s): 8fc9981

Update README.md
Files changed (1):
  1. README.md +5 -7
README.md CHANGED
@@ -93,16 +93,14 @@ Here is an example using a custom collator to bundle batches in a trainable way
 
  from datasets import load_dataset
 
-
-
- docvqa_dataset = load_dataset("pixparse/docvqa-single-page", split="train"
+ docvqa_dataset = load_dataset("pixparse/docvqa-single-page-questions", split="train"
  )
-
- collator_class = Collator()
- loader = DataLoader(docvqa_dataset, batch_size=8, collate_fn=collator_class.collate_fn)
+ next(iter(docvqa_dataset)).keys()
+ >>> dict_keys(['image', 'question_id', 'question', 'answers', 'data_split', 'ocr_results', 'other_metadata'])
  ```
+ `image` will be a byte string containing the image contents. `answers` is a list of possible answers, aligned with the expected inputs to the [ANLS metric](https://arxiv.org/abs/1905.13648).
 
- The loader can then be iterated on normally and yields image + question and answer samples.
+ The loader can then be iterated on normally and yields questions. Many questions rely on the same image, so there is some amount of data duplication.
 
  ### Data Splits
 
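
Below is a minimal usage sketch based on the updated README text: it assumes the `image` field is a raw byte string as described, and pairs the dataset with a custom collator in the spirit of the surrounding README section. The `collate_fn` and field handling here are illustrative, not the repository's own collator.

```python
from io import BytesIO

from PIL import Image
from datasets import load_dataset
from torch.utils.data import DataLoader

docvqa_dataset = load_dataset("pixparse/docvqa-single-page-questions", split="train")


def collate_fn(batch):
    # Decode the raw image bytes into PIL images; keep the questions and the
    # per-question answer lists (the ANLS references) as plain Python lists.
    images = [Image.open(BytesIO(sample["image"])).convert("RGB") for sample in batch]
    questions = [sample["question"] for sample in batch]
    answers = [sample["answers"] for sample in batch]
    return {"images": images, "questions": questions, "answers": answers}


loader = DataLoader(docvqa_dataset, batch_size=8, collate_fn=collate_fn)
batch = next(iter(loader))  # 8 image/question pairs with their reference answers
```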
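
And a short sketch of how the `answers` list can feed an ANLS computation, following the metric's definition in the linked paper (best normalized Levenshtein similarity over the acceptable answers, thresholded at 0.5). The function names are illustrative and not part of this repository.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]


def anls_score(prediction: str, answers: list[str], threshold: float = 0.5) -> float:
    # Keep the best similarity over all acceptable answers; distances at or
    # above the threshold contribute a score of 0, per the ANLS definition.
    best = 0.0
    for answer in answers:
        pred, ans = prediction.strip().lower(), answer.strip().lower()
        nl = levenshtein(pred, ans) / max(len(pred), len(ans), 1)
        if nl < threshold:
            best = max(best, 1.0 - nl)
    return best
```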