Commit b6cb0d1 (verified) · parent: bbf0936 · hheiden-roots

Update README.md

Files changed (1): README.md (+10 -14)
README.md CHANGED
@@ -55,7 +55,7 @@ The TABME dataset is a synthetic collection created to simulate the digitization
 
 TABME++ replaces the previous OCR with commercial-quality OCR obtained through Microsoft's OCR services.
 
- - **Curated by:** Roots Automation
+ - **Curated by:** UCSF, UCL, University of Cambridge, Vector.ai, Roots Automation
 - **Language(s) (NLP):** English
 - **License:** MIT
 
@@ -68,25 +68,21 @@ TABME++ replaces the previous OCR with commercial-quality OCR obtained through
 
 ## Uses
 
- <!-- Address questions around how the dataset is intended to be used. -->
-
 ### Direct Use
 
- <!-- This section describes suitable use cases for the dataset. -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]
+ This dataset is intended to be used for page stream segmentation: the segmentation of a stream of ordered pages into coherent atomic documents.
 
 ## Dataset Structure
 
- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
+ Each row of the dataset corresponds to one page of one document.
+ Each page has the following features:
+ - `doc_id`, str: The unique document id this page belongs to.
+ - `pg_id`, int: The page id within its document.
+ - `ocr`, str: A string containing the OCR annotations from Microsoft OCR. These can be loaded as a Python dictionary with `json.loads` (or equivalent).
+ - `img`, binary: The raw bytes of the page image. These can be converted back to a `PIL.Image` with `Image.open(io.BytesIO(bytes))` (or equivalent).
 
- [More Information Needed]
+ This dataset is given such that each document appears once.
+ To build out the full aggregated synthetic streams, one needs to collate the unique documents according to the streams described in the [streams sub-folder](streams/).
 
 ## Dataset Creation
 
88