Update README.md
The dataset contains 16,887 Wikipedia screenshots, which are segmented into 54,032 subpages since the full screenshots are potentially long. In total, there are 159,905 tables in the dataset, and 70,652 question-answer samples. Each QA sample is a triplet of <question, answer, full-page screenshot filename>, and is additionally annotated with retrieval labels (which subpage, and which table, contain the answer). 53,698 of the QA samples also have a SQL annotation.
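
Concretely, one QA sample can be pictured as a small record like the one below. This is only an illustrative sketch: the field names, filenames, and question/answer values are assumptions for exposition, not the dataset's actual schema.

```python
# Illustrative sketch of one WikiDT QA sample.
# Field names and values are assumed, not the dataset's actual schema.
qa_sample = {
    "question": "Which season had the highest attendance?",  # dummy question
    "answer": "1997",                                        # dummy answer
    "screenshot": "page_00042.png",          # full-page screenshot filename
    "retrieval": {                           # retrieval labels
        "subpage": "page_00042_part_2.png",  # which subpage holds the answer
        "table_id": 3,                       # which table on that subpage
    },
    # Present for 53,698 of the 70,652 samples.
    "sql": "SELECT season FROM t ORDER BY attendance DESC LIMIT 1",
}

# Share of QA samples carrying a SQL annotation.
print(round(53698 / 70652, 3))  # 0.76
```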
For each subpage, OCR and table extraction annotations from two sources are available. While rendering the screenshots, the ground-truth table annotation is recorded. Meanwhile, to make the dataset realistic, we also requested OCR and table extraction results from [Amazon Textract](https://aws.amazon.com/textract/) for each subpage (results obtained between Feb 28 and Mar 6, 2023).
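
For readers unfamiliar with Textract's table output: `analyze_document` with `FeatureTypes=["TABLES"]` returns a flat list of `Blocks` (`TABLE`, `CELL`, `WORD`, ...) linked by `CHILD` relationships, and a table grid has to be reassembled from `RowIndex`/`ColumnIndex` on the cells. The sketch below reconstructs a grid from a hand-made, Textract-style toy response; the block contents are invented for illustration, not taken from the dataset.

```python
# Minimal sketch: rebuild a table grid from a Textract-style "Blocks" list
# (shape as returned by analyze_document with FeatureTypes=["TABLES"]).
# The blocks below are a hand-made toy example, not real dataset output.
blocks = [
    {"Id": "t1", "BlockType": "TABLE",
     "Relationships": [{"Type": "CHILD", "Ids": ["c1", "c2"]}]},
    {"Id": "c1", "BlockType": "CELL", "RowIndex": 1, "ColumnIndex": 1,
     "Relationships": [{"Type": "CHILD", "Ids": ["w1"]}]},
    {"Id": "c2", "BlockType": "CELL", "RowIndex": 1, "ColumnIndex": 2,
     "Relationships": [{"Type": "CHILD", "Ids": ["w2"]}]},
    {"Id": "w1", "BlockType": "WORD", "Text": "Season"},
    {"Id": "w2", "BlockType": "WORD", "Text": "Attendance"},
]

by_id = {b["Id"]: b for b in blocks}

def cell_text(cell):
    # Concatenate the WORD children of a CELL block.
    words = []
    for rel in cell.get("Relationships", []):
        if rel["Type"] == "CHILD":
            words += [by_id[i]["Text"] for i in rel["Ids"]]
    return " ".join(words)

# Map (row, column) -> cell text.
table = {(b["RowIndex"], b["ColumnIndex"]): cell_text(b)
         for b in blocks if b["BlockType"] == "CELL"}
print(table)  # {(1, 1): 'Season', (1, 2): 'Attendance'}
```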
### Languages

English

## Dataset Structure
Here is an example of an xml table bbox annotation from `WikiDT-dataset/WikiTabl
</object>
</annotation>
```
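
The closing tags above suggest a Pascal-VOC-style layout (an `<annotation>` root containing `<object>` entries, each with a bounding box). Under that assumption, a sketch of reading the table boxes might look like this; the toy XML and its tag names (`bndbox`, `xmin`, ...) are illustrative guesses, and the exact tags in WikiDT's files may differ.

```python
import xml.etree.ElementTree as ET

# Toy annotation in the Pascal-VOC style suggested by the closing tags above;
# tag names and values are assumed, not copied from WikiDT.
xml_text = """
<annotation>
  <filename>page_00042_part_2.png</filename>
  <object>
    <name>table</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>410</xmax><ymax>220</ymax></bndbox>
  </object>
</annotation>
"""

root = ET.fromstring(xml_text)
boxes = []
for obj in root.iter("object"):
    bb = obj.find("bndbox")
    # Collect the box as (xmin, ymin, xmax, ymax).
    boxes.append(tuple(int(bb.find(t).text) for t in ("xmin", "ymin", "xmax", "ymax")))

print(boxes)  # [(10, 20, 410, 220)]
```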
### Licensing Information

CC BY-SA 3.0

### Contributions

- [Hui Shi](mailto:[email protected]) (work done during her internship at Amazon)
- [Yusheng Xie](mailto:[email protected]) (corresponding person)
- [Luis Goncalves](mailto:[email protected])