Johannes Bayer committed on
Commit c900970 · 1 Parent(s): ace9ea6

Overhauled Version of Consistency Script, improved README, and Loader

Files changed (4)
  1. README.md +60 -33
  2. classes_color.json +1 -0
  3. consistency.py +115 -31
  4. loader.py +32 -7
README.md CHANGED
@@ -11,11 +11,14 @@ language:
11
  - de
12
  ---
13
 
14
- # Public Ground-Truth Dataset for Handwritten Circuit Diagrams (GTDB-HD)
15
- This repository contains images of hand-drawn electrical circuit diagrams as well as accompanying bounding box annotation for object detection as well as segmentation ground truth files. This dataset is intended to train (e.g. neural network) models for the purpose of the extraction of electrical graphs from raster graphics.
 
 
 
16
 
17
  ## Structure
18
- The folder structure is made up as follows:
19
 
20
  ```
21
  gtdh-hd
@@ -25,12 +28,13 @@ gtdh-hd
25
  │ classes_discontinuous.json # Classes Morphology Info
26
  │ classes_ports.json # Electrical Port Descriptions for Classes
27
  │ consistency.py # Dataset Statistics and Consistency Check
28
- | loader.py # Simple Dataset Loader and Storage Functions
29
  │ segmentation.py # Multiclass Segmentation Generation
30
  │ utils.py # Helper Functions
31
  │ requirements.txt # Requirements for Scripts
 
32
  └───drafter_D
33
- │ └───annotations # Bounding Box Annotations
34
  │ │ │ CX_DY_PZ.xml
35
  │ │ │ ...
36
  │ │
@@ -55,13 +59,15 @@ Where:
55
  - `Y` is the Local Number of the Circuit's Drawings (2 Drawings per Circuit)
56
  - `Z` is the Local Number of the Drawing's Image (4 Pictures per Drawing)
57
 
58
- ### Image Files
59
- Every image is RGB-colored and either stored as `jpg`, `jpeg` or `png` (both uppercase and lowercase suffixes exist).
60
 
61
  ### Bounding Box Annotations
62
- A complete list of class labels including a suggested mapping table to integer numbers for training and prediction purposes can be found in `classes.json`. The annotations contains **BB**s (Bounding Boxes) of **RoI**s (Regions of Interest) like electrical symbols or texts within the raw images and are stored in the [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) format.
 
 
63
 
64
- Please note: *For every Raw image in the dataset, there is an accompanying bounding box annotation file.*
65
 
66
  #### Known Labeled Issues
67
  - C25_D1_P4 cuts off a text
@@ -71,36 +77,52 @@ Please note: *For every Raw image in the dataset, there is an accompanying bound
71
  - C33_D1_P4 is missing a text
72
  - C46_D2_P2 cuts off a text
73
 
74
- ### Instance Segmentation
75
- For every binary segmentation map, there is an accompanying polygonal annotation file for instance segmentation purposes, which is stored in the [labelme](https://github.com/wkentaro/labelme) format. Note that the contained polygons are quite coarse, intended to be used in conjunction with the binary segmentation maps for connection extraction and to tell individual instances with overlapping BBs apart.
76
 
77
- ### Segmentation Maps
78
- Binary Segmentation images are available for some samples and bear the same resolution as the respective image files. They are considered to contain only black and white pixels indicating areas of drawings strokes and background respectively.
79
 
80
  ### Netlists
81
  For some images, there are also netlist files available, which are stored in the [ASC](http://ltwiki.org/LTspiceHelp/LTspiceHelp/Spice_Netlist.htm) format.
82
 
83
- ### Consistency and Statistics
84
- This repository comes with a stand-alone script to:
85
 
86
- - Obtain Statistics on
 
 
 
 
 
 
 
 
 
 
 
 
 
 
87
  - Class Distribution
88
  - BB Sizes
89
- - Check the BB Consistency
90
- - Classes with Regards to the `classes.json`
91
- - Counts between Pictures of the same Drawing
92
- - Ensure a uniform writing style of the Annotation Files (indent)
93
 
94
  The respective script is called without arguments to operate on the **entire** dataset:
95
 
96
  ```
97
- $ python3 consistency.py
 
 
 
 
 
 
98
  ```
99
 
100
- Note that due to a complete re-write of the annotation data, the script takes several seconds to finish. A drafter can be specified as CLI argument to restrict the evaluation (for example drafter 15):
101
 
102
  ```
103
- $ python3 consistency.py 15
104
  ```
105
 
106
  ### Multi-Class (Instance) Segmentation Processing
@@ -166,6 +188,7 @@ db = read_images(drafter=12) # Returns a list of (Image, Annotation) pairs
166
  db = read_snippets(drafter=12) # Returns a list of (Image, Annotation) pairs
167
  ```
168
 
 
169
  ## Citation
170
  If you use this dataset for scientific publications, please consider citing us as follows:
171
 
@@ -180,8 +203,10 @@ If you use this dataset for scientific publications, please consider citing us a
180
  }
181
  ```
182
 
 
183
  ## How to Contribute
184
- If you want to contribute to the dataset as a drafter or in case of any further questions, please send an email to: <johannes.bayer@dfki.de> (corresponding author), <[email protected]>, <[email protected]>
 
185
 
186
  ## Guidelines
187
  These guidelines are used throughout the generation of the dataset. They can be used as an instruction for participants and data providers.
@@ -190,6 +215,8 @@ These guidelines are used throughout the generation of the dataset. They can be
190
  - 12 Circuits should be drawn, each of them twice (24 drawings in total)
191
  - Most important: The drawing should be as natural to the drafter as possible
192
  - Free-Hand sketches are preferred, using rulers and drawing Template stencils should be avoided unless it appears unnatural to the drafter
 
 
193
  - Different types of pens/pencils should be used for different drawings
194
  - Different kinds of (colored, structured, ruled, lined) paper should be used
195
  - One symbol set (European/American) should be used throughout one drawing (consistency)
@@ -201,7 +228,7 @@ These guidelines are used throughout the generation of the dataset. They can be
201
  - Angle should vary
202
  - Lighting should vary
203
  - Moderate (e.g. motion) blur is allowed
204
- - All circuit-related aspects of the drawing must be _human-recognicable_
205
  - The drawing should be the main part of the image, but _naturally_ occurring objects from the environment are welcomed
206
  - The first image should be _clean_, i.e. ideal capturing conditions
207
  - Kinks and Buckling can be applied to the drawing between individual image capturing
@@ -214,11 +241,11 @@ These guidelines are used throughout the generation of the dataset. They can be
214
  - General Placement
215
  - A **RoI** must be **completely** surrounded by its **BB**
216
  - A **BB** should be as tight as possible to the **RoI**
217
- - In case of connecting lines not completely touching the symbol, the BB should extended (only by a small margin) to enclose those gaps (epecially considering junctions)
218
  - Characters that are part of the **essential symbol definition** should be included in the BB (e.g. the `+` of a polarized capacitor should be included in its BB)
219
  - **Junction** annotations
220
  - Used for actual junction points (Connection of three or more wire segments with a small solid circle)
221
- - Used for connection of three or more sraight line wire segements where a physical connection can be inferred by context (i.e. can be distinuished from **crossover**)
222
  - Used for wire line corners
223
  - Redundant Junction Points should **not** be annotated (small solid circle in the middle of a straight line segment)
224
  - Should not be used for corners or junctions that are part of the symbol definition (e.g. Transistors)
@@ -241,7 +268,7 @@ These guidelines are used throughout the generation of the dataset. They can be
241
  - Only add terminal text annotation if the terminal is not part of the essential symbol definition
242
  - **Table** cells should be annotated independently
243
  - **Operation Amplifiers**
244
- - Both the triangular US symbols and the european IC-like symbols symbols for OpAmps should be labeled `operational_amplifier`
245
  - The `+` and `-` signs at the OpAmp's input terminals are considered essential and should therefore not be annotated as texts
246
  - **Complex Components**
247
  - Both the entire Component and its sub-Components and internal connections should be annotated:
@@ -263,7 +290,7 @@ These guidelines are used throughout the generation of the dataset. They can be
263
 
264
 
265
  #### Rotation Annotations
266
- The Rotation (integer in degree) should capture the overall rotation of the symbol shape. However, the position of the terminals should also be taked into consideration. Under idealized circumstances (no perspective distorion and accurately drawn symbols according to the symbol library), these two requirements equal each other. For pathological cases however, in which shape and the set of terminals (or even individual terminals) are conflicting, the rotation should compromise between all factors.
267
 
268
  Rotation annotations are currently work in progress. They should be provided for at least the following classes:
269
  - "voltage.dc"
@@ -273,8 +300,8 @@ Rotation annotations are currently work in progress. They should be provided for
273
  - "transistor.bjt"
274
 
275
  #### Text Annotations
276
- - The Character Sequence in the Text Label Annotations should describe the actual Characters depicted in the respective Bounding Box as Precisely as Possible
277
- - Bounding Box Annotations of class `text`
278
  - Bear an additional `<text>` tag in which their content is given as string
279
  - The `Omega` and `Mikro` Symbols are escaped respectively
280
  - Currently Work in Progress
@@ -307,6 +334,6 @@ Rotation annotations are currently work in progress. They should be provided for
307
  labelme --labels "connector" --config "{shift_auto_shape_color: 1}" --nodata
308
  ```
309
 
310
- ## Licence
311
- The entire content of this repository, including all image files, annotation files as well as has sourcecode, metadata and documentation has been published under the [Creative Commons Attribution Share Alike Licence 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
312
 
 
 
 
11
  - de
12
  ---
13
 
14
+
15
+
16
+ # A Public Ground-Truth Dataset for Handwritten Circuit Diagrams (CGHD)
17
+ This repository contains images of hand-drawn electrical circuit diagrams as well as accompanying bounding box annotations, polygon annotations and segmentation files. These annotations serve as ground truth for training and evaluating models for several image processing tasks like object detection, instance segmentation and text detection. The purpose of this dataset is to facilitate the automated extraction of electrical graph structures from raster graphics.
18
+
19
 
20
  ## Structure
21
+ The folder and file structure is organized as follows:
22
 
23
  ```
24
  gtdh-hd
 
28
  │ classes_discontinuous.json # Classes Morphology Info
29
  │ classes_ports.json # Electrical Port Descriptions for Classes
30
  │ consistency.py # Dataset Statistics and Consistency Check
31
+ loader.py # Simple Dataset Loader and Storage Functions
32
  │ segmentation.py # Multiclass Segmentation Generation
33
  │ utils.py # Helper Functions
34
  │ requirements.txt # Requirements for Scripts
35
+
36
  └───drafter_D
37
+ │ └───annotations # Bounding Box, Rotation and Text Label Annotations
38
  │ │ │ CX_DY_PZ.xml
39
  │ │ │ ...
40
  │ │
 
59
  - `Y` is the Local Number of the Circuit's Drawings (2 Drawings per Circuit)
60
  - `Z` is the Local Number of the Drawing's Image (4 Pictures per Drawing)
61
 
62
+ ### Raw Image Files
63
+ Every raw image is RGB-colored and stored as either `jpg`, `jpeg` or `png` (both uppercase and lowercase suffixes exist). Raw images are always stored in sub-folders named `images`.
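
As a minimal sketch (assuming it is run from the repository root), all raw image paths can be collected regardless of suffix case:

```
from pathlib import Path

# Raw images live in drafter_*/images with jpg/jpeg/png suffixes in either case
image_paths = [path for path in Path(".").glob("drafter_*/images/*")
               if path.suffix.lower() in (".jpg", ".jpeg", ".png")]
print(len(image_paths), "raw images found")
```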
64
 
65
  ### Bounding Box Annotations
66
+ For every raw image in the dataset, there is an annotation file which contains **BB**s (Bounding Boxes) of **RoI**s (Regions of Interest) like electrical symbols or texts within that image. These BB annotations are stored in the [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) format. Apart from its location in the image, every BB bears a class label; a complete list of class labels, including a suggested mapping table to integer numbers for training and prediction purposes, can be found in `classes.json`. As the BB annotations are the most basic and pivotal element of this dataset, they are stored in sub-folders named `annotations`.
67
+
68
+ Please Note: *For every Raw image in the dataset, there is an accompanying BB annotation file.*
69
 
70
+ Please Note: *The BB annotation files are also used to store symbol rotation and text label annotations as XML Tags that form an extension of the utilized PASCAL VOC format.*
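
As a minimal sketch, a single annotation file can be inspected with the standard library; the file path below is a hypothetical example, and `read_dataset` in `loader.py` performs this parsing for the whole dataset:

```
import xml.etree.ElementTree as ET

# Hypothetical example path; any file under drafter_*/annotations follows the same layout
root = ET.parse("drafter_15/annotations/C26_D1_P1.xml").getroot()

for obj in root.findall("object"):
    label = obj.find("name").text                      # class label, see classes.json
    bbox = {tag: int(obj.find(f"bndbox/{tag}").text)   # BB corners in pixel coordinates
            for tag in ("xmin", "ymin", "xmax", "ymax")}
    rotation = obj.find("bndbox/rotation")              # optional extension tag
    text = obj.find("text")                             # optional extension tag (text annotations)
    print(label, bbox,
          None if rotation is None else int(rotation.text),
          None if text is None else text.text)
```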
71
 
72
  #### Known Labeled Issues
73
  - C25_D1_P4 cuts off a text
 
77
  - C33_D1_P4 is missing a text
78
  - C46_D2_P2 cuts off a text
79
 
80
+ ### Binary Segmentation Maps
81
+ Binary segmentation images are available for some raw image samples and consequently bear the same resolution as the respective raw images. The defined goal is to have a segmentation map for at least one of the images of every circuit. Binary segmentation maps are considered to contain black and white pixels only. More precisely, white pixels indicate any kind of background like paper (ruling), surrounding objects or hands, while black pixels indicate areas of drawing strokes belonging to the circuit. As binary segmentation images are the only permanent type of segmentation map in this dataset, they are stored in sub-folders named `segmentation`.
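
A minimal sketch for loading such a map and measuring its stroke coverage (the file path is a hypothetical example):

```
import cv2

# Hypothetical example path; segmentation maps live in drafter_*/segmentation
seg = cv2.imread("drafter_15/segmentation/C26_D1_P1.png", cv2.IMREAD_GRAYSCALE)

# Treat dark pixels as strokes and bright pixels as background
strokes = seg < 128
print("resolution (height, width):", seg.shape)
print("stroke pixel ratio:", strokes.mean())
```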
82
 
83
+ ### Polygon Annotations
84
+ For every binary segmentation map, there is an accompanying polygonal annotation file for instance segmentation purposes, stored in the [labelme](https://github.com/wkentaro/labelme) format (which is why the polygon annotations are referred to as `instances` and stored in sub-folders of this name). Note that the contained polygons are quite coarse; they are intended to be used in conjunction with the binary segmentation maps for connection extraction and to tell individual instances with overlapping BBs apart.
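
A minimal sketch for listing the polygons of one instance file without installing labelme (the file path is a hypothetical example; `read_dataset(segmentation=True)` in `loader.py` covers the whole dataset):

```
import json

# Hypothetical example path; polygon annotations live in drafter_*/instances
with open("drafter_15/instances/C26_D1_P1.json") as handle:
    shapes = json.load(handle)["shapes"]

for shape in shapes:
    if shape["shape_type"] == "polygon":
        xs = [point[0] for point in shape["points"]]
        ys = [point[1] for point in shape["points"]]
        print(shape["label"], len(shape["points"]), "points,",
              "bbox:", (min(xs), min(ys), max(xs), max(ys)))
```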
85
 
86
  ### Netlists
87
  For some images, there are also netlist files available, which are stored in the [ASC](http://ltwiki.org/LTspiceHelp/LTspiceHelp/Spice_Netlist.htm) format.
88
 
 
 
89
 
90
+ ## Processing Scripts
91
+ This repository comes with several Python scripts. These have been tested with [Python 3.11](https://docs.python.org/3.11/). Before running them, please make sure all requirements are met (see `requirements.txt`).
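
For example, the dependencies can be installed into the active environment via pip:

```
pip install -r requirements.txt
```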
92
+
93
+ ### Consistency and Statistics
94
+ The consistency script performs data integrity checks and corrections and derives statistics for the dataset. Its features include:
95
+
96
+ - Ensure annotation files are stored uniformly
97
+ - Same version of annotation file format being used
98
+ - Same indent, uniform line breaks between tags (important to use `git diff` effectively)
99
+ - Check Annotation Integrity
100
+ - Classes referenced in the (BB/Polygon) Annotations are contained in the central `classes.json` list
101
+ - `text` Annotations actually contain a non-empty text label and text labels exist in `text` annotations only
102
+ - Class Counts between Pictures of the same Drawing are identical
103
+ - Image Dimensions stated in the annotation files match the referenced images
104
+ - Obtain Statistics
105
  - Class Distribution
106
  - BB Sizes
107
+ - Image Size Distribution
108
+ - Text Character Distribution
 
 
109
 
110
  The respective script is called without arguments to operate on the **entire** dataset:
111
 
112
  ```
113
+ python consistency.py
114
+ ```
115
+
116
+ Note that due to a complete re-write of the annotation data, the script takes several seconds to finish. Therefore, the script can be restricted to an individual drafter, specified as a CLI argument (for example, drafter 15):
117
+
118
+ ```
119
+ python consistency.py -d 15
120
  ```
121
 
122
+ To reduce computational overhead and CLI output, most functions are deactivated by default. To see the list of available options, run:
123
 
124
  ```
125
+ python consistency.py -h
126
  ```
127
 
128
  ### Multi-Class (Instance) Segmentation Processing
 
188
  db = read_snippets(drafter=12) # Returns a list of (Image, Annotation) pairs
189
  ```
190
 
191
+
192
  ## Citation
193
  If you use this dataset for scientific publications, please consider citing us as follows:
194
 
 
203
  }
204
  ```
205
 
206
+
207
  ## How to Contribute
208
+ If you want to contribute to the dataset as a drafter or in case of any further questions, please send an email to: <johannes.bayer@mail.de>
209
+
210
 
211
  ## Guidelines
212
  These guidelines are used throughout the generation of the dataset. They can be used as an instruction for participants and data providers.
 
215
  - 12 Circuits should be drawn, each of them twice (24 drawings in total)
216
  - Most important: The drawing should be as natural to the drafter as possible
217
  - Free-Hand sketches are preferred, using rulers and drawing Template stencils should be avoided unless it appears unnatural to the drafter
218
+ - The sketches should not be traced directly from a template (e.g. from the Original Printed Circuits)
219
+ - Minor alterations between the two drawings of a circuit (e.g. shifting a wire line) are encouraged within the circuit's layout as long as the circuit's function is preserved (only if the drafter is familiar with schematics)
220
  - Different types of pens/pencils should be used for different drawings
221
  - Different kinds of (colored, structured, ruled, lined) paper should be used
222
  - One symbol set (European/American) should be used throughout one drawing (consistency)
 
228
  - Angle should vary
229
  - Lighting should vary
230
  - Moderate (e.g. motion) blur is allowed
231
+ - All circuit-related aspects of the drawing must be _human-recognizable_
232
  - The drawing should be the main part of the image, but _naturally_ occurring objects from the environment are welcomed
233
  - The first image should be _clean_, i.e. ideal capturing conditions
234
  - Kinks and Buckling can be applied to the drawing between individual image capturing
 
241
  - General Placement
242
  - A **RoI** must be **completely** surrounded by its **BB**
243
  - A **BB** should be as tight as possible to the **RoI**
244
+ - In case of connecting lines not completely touching the symbol, the BB should be extended (only by a small margin) to enclose those gaps (especially considering junctions)
245
  - Characters that are part of the **essential symbol definition** should be included in the BB (e.g. the `+` of a polarized capacitor should be included in its BB)
246
  - **Junction** annotations
247
  - Used for actual junction points (Connection of three or more wire segments with a small solid circle)
248
+ - Used for connection of three or more straight line wire segments where a physical connection can be inferred by context (i.e. can be distinguished from **crossover**)
249
  - Used for wire line corners
250
  - Redundant Junction Points should **not** be annotated (small solid circle in the middle of a straight line segment)
251
  - Should not be used for corners or junctions that are part of the symbol definition (e.g. Transistors)
 
268
  - Only add terminal text annotation if the terminal is not part of the essential symbol definition
269
  - **Table** cells should be annotated independently
270
  - **Operation Amplifiers**
271
+ - Both the triangular US symbols and the European IC-like symbols for OpAmps should be labeled `operational_amplifier`
272
  - The `+` and `-` signs at the OpAmp's input terminals are considered essential and should therefore not be annotated as texts
273
  - **Complex Components**
274
  - Both the entire Component and its sub-Components and internal connections should be annotated:
 
290
 
291
 
292
  #### Rotation Annotations
293
+ The Rotation (integer in degree) should capture the overall rotation of the symbol shape. However, the position of the terminals should also be taken into consideration. Under idealized circumstances (no perspective distortion and accurately drawn symbols according to the symbol library), these two requirements equal each other. For pathological cases however, in which shape and the set of terminals (or even individual terminals) are conflicting, the rotation should compromise between all factors.
294
 
295
  Rotation annotations are currently work in progress. They should be provided for at least the following classes:
296
  - "voltage.dc"
 
300
  - "transistor.bjt"
301
 
302
  #### Text Annotations
303
+ - The Character Sequence in the Text Label Annotations should describe the actual Characters depicted in the respective BB as Precisely as Possible
304
+ - BB Annotations of class `text`
305
  - Bear an additional `<text>` tag in which their content is given as string
306
  - The `Omega` and `Mikro` Symbols are escaped respectively
307
  - Currently Work in Progress
 
334
  labelme --labels "connector" --config "{shift_auto_shape_color: 1}" --nodata
335
  ```
336
 
 
 
337
 
338
+ ## Licence
339
+ The entire content of this repository, including all image files, annotation files, source code, metadata and documentation, has been published under the [Creative Commons Attribution Share Alike Licence 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
classes_color.json CHANGED
@@ -71,6 +71,7 @@
71
 
72
  "magnetic": [0,230,230],
73
  "optical": [230,0,230],
 
74
 
75
  "unknown": [240,255,240]
76
  }
 
71
 
72
  "magnetic": [0,230,230],
73
  "optical": [230,0,230],
74
+ "explanatory": [230,100,100],
75
 
76
  "unknown": [240,255,240]
77
  }
consistency.py CHANGED
@@ -2,11 +2,11 @@
2
 
3
  # System Imports
4
  import os
5
- import sys
6
  import re
 
7
 
8
  # Project Imports
9
- from loader import load_classes, load_properties, read_dataset, write_dataset, file_name
10
  from utils import bbdist
11
 
12
  # Third-Party Imports
@@ -14,10 +14,10 @@ import matplotlib.pyplot as plt
14
  import numpy as np
15
 
16
  __author__ = "Johannes Bayer, Shabi Haider"
17
- __copyright__ = "Copyright 2021-2023, DFKI"
18
  __license__ = "CC"
19
  __version__ = "0.0.2"
20
- __email__ = "johannes.bayer@dfki.de"
21
  __status__ = "Prototype"
22
 
23
 
@@ -30,10 +30,10 @@ MAPPING_LOOKUP = {
30
  }
31
 
32
 
33
- def consistency(db: list, classes: dict, recover: dict = {}, skip_texts=False) -> tuple:
34
  """Checks Whether Annotation Classes are in provided Classes Dict and Attempts Recovery"""
35
 
36
- total, ok, mapped, faulty, rotation, text = 0, 0, 0, 0, 0, 0
37
 
38
  for sample in db:
39
  for annotation in sample["bboxes"] + sample["polygons"] + sample["points"]:
@@ -47,15 +47,21 @@ def consistency(db: list, classes: dict, recover: dict = {}, skip_texts=False) -
47
  mapped += 1
48
 
49
  if annotation["class"] not in classes and annotation["class"] not in recover:
50
- print(f"Can't recover faulty label in {file_name(sample)}: {annotation['class']}")
51
  faulty += 1
52
 
53
  if annotation["rotation"] is not None:
54
  rotation += 1
55
 
56
- if not skip_texts:
 
 
 
 
 
 
57
  if annotation["class"] == "text" and annotation["text"] is None:
58
- print(f"Missing Text in {file_name(sample)} -> {annotation['xmin']}, {annotation['ymin']}")
59
 
60
  if annotation["text"] is not None:
61
  if annotation["text"].strip() != annotation["text"]:
@@ -63,11 +69,23 @@ def consistency(db: list, classes: dict, recover: dict = {}, skip_texts=False) -
63
  annotation["text"] = annotation["text"].strip()
64
 
65
  if annotation["class"] != "text":
66
- print(f"Text string outside Text Annotation in {file_name(sample)} [{annotation['xmin']:4}, {annotation['ymin']:4}]: {annotation['class']}: {annotation['text']}")
67
 
68
  text += 1
69
 
70
- return total, ok, mapped, faulty, rotation, text
 
 
 
 
 
 
 
 
 
 
 
 
71
 
72
 
73
  def consistency_circuit(db: list, classes: dict) -> None:
@@ -86,6 +104,22 @@ def consistency_circuit(db: list, classes: dict) -> None:
86
  print(f" Circuit {circuit}: {cls}: {check}")
87
 
88
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
89
  def circuit_annotations(db: list, classes: dict) -> None:
90
  """Plots the Annotations per Sample and Class"""
91
 
@@ -144,15 +178,31 @@ def class_distribution(db: list, classes: dict) -> None:
144
  plt.show()
145
 
146
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
147
  def class_sizes(db: list, classes: dict) -> None:
148
  """"""
149
 
150
  plt.title('BB Sizes')
 
151
  plt.boxplot([[max(bbox["xmax"]-bbox["xmin"], bbox["ymax"]-bbox["ymin"])
152
  for sample in db for bbox in sample["bboxes"] if bbox["class"] == cls]
153
- for cls in classes])
154
  class_nbrs = np.arange(len(classes))+1
155
- plt.xticks(class_nbrs, labels=classes, rotation=90)
 
156
  plt.show()
157
 
158
 
@@ -161,19 +211,21 @@ def image_count(drafter: int = None, segmentation: bool = False) -> int:
161
 
162
  return len([file_name for root, _, files in os.walk(".")
163
  for file_name in files
164
- if ("segmentation" if segmentation else "annotation") in root and
165
- (not drafter or f"drafter_{drafter}{os.sep}" in root)])
166
 
167
 
168
- def read_check_write(classes: dict, drafter: int = None, segmentation: bool = False) -> list:
 
169
  """Reads Annotations, Checks Consistency with Provided Classes
170
  Writes Corrected Annotations Back and Returns the Annotations"""
171
 
172
  db = read_dataset(drafter=drafter, segmentation=segmentation)
173
- ann_total, ann_ok, ann_mapped, ann_faulty, ann_rot, ann_text = consistency(db,
174
- classes,
175
- MAPPING_LOOKUP,
176
- skip_texts=segmentation)
 
177
  write_dataset(db, segmentation=segmentation)
178
 
179
  print("")
@@ -188,7 +240,9 @@ def read_check_write(classes: dict, drafter: int = None, segmentation: bool = Fa
188
  print(f"Faulty Annotations (no recovery): {ann_faulty}")
189
  print(f"Corrected Annotations by Mapping: {ann_mapped}")
190
  print(f"Annotations with Rotation: {ann_rot}")
 
191
  print(f"Annotations with Text: {ann_text}")
 
192
 
193
  return db
194
 
@@ -321,16 +375,46 @@ def text_statistics(db: list, plot_unique_labels: bool = False):
321
 
322
 
323
  if __name__ == "__main__":
324
- drafter_selected = int(sys.argv[1]) if len(sys.argv) == 2 else None
325
- classes = load_classes()
326
 
327
- db_bb = read_check_write(classes, drafter_selected)
328
- db_poly = read_check_write(classes, drafter_selected, segmentation=True)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
329
 
330
- class_sizes(db_bb, classes)
331
- circuit_annotations(db_bb, classes)
332
- annotation_distribution(db_bb)
333
- class_distribution(db_bb, classes)
334
- class_distribution(db_poly, classes)
335
- consistency_circuit(db_bb, classes)
336
- text_statistics(db_bb)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
2
 
3
  # System Imports
4
  import os
 
5
  import re
6
+ import argparse
7
 
8
  # Project Imports
9
+ from loader import load_classes, load_properties, read_dataset, write_dataset, read_image, sample_name_tracable
10
  from utils import bbdist
11
 
12
  # Third-Party Imports
 
14
  import numpy as np
15
 
16
  __author__ = "Johannes Bayer, Shabi Haider"
17
+ __copyright__ = "Copyright 2021-2023, DFKI, 2024-2025, Johannes Bayer"
18
  __license__ = "CC"
19
  __version__ = "0.0.2"
20
+ __email__ = "johannes.bayer@mail.de"
21
  __status__ = "Prototype"
22
 
23
 
 
30
  }
31
 
32
 
33
+ def consistency(db: list, classes: dict, recover: dict = {}, check_texts=True, check_images=True) -> tuple:
34
  """Checks Whether Annotation Classes are in provided Classes Dict and Attempts Recovery"""
35
 
36
+ total, ok, mapped, faulty, rotation, mirror_h, mirror_v, text = 0, 0, 0, 0, 0, 0, 0, 0
37
 
38
  for sample in db:
39
  for annotation in sample["bboxes"] + sample["polygons"] + sample["points"]:
 
47
  mapped += 1
48
 
49
  if annotation["class"] not in classes and annotation["class"] not in recover:
50
+ print(f"Can't recover faulty label in {sample_name_tracable(sample)}: {annotation['class']}")
51
  faulty += 1
52
 
53
  if annotation["rotation"] is not None:
54
  rotation += 1
55
 
56
+ if annotation["mirror_horizontal"]:
57
+ mirror_h += 1
58
+
59
+ if annotation["mirror_vertical"]:
60
+ mirror_v += 1
61
+
62
+ if check_texts:
63
  if annotation["class"] == "text" and annotation["text"] is None:
64
+ print(f"Missing Text in {sample_name_tracable(sample)} -> {annotation['xmin']}, {annotation['ymin']}")
65
 
66
  if annotation["text"] is not None:
67
  if annotation["text"].strip() != annotation["text"]:
 
69
  annotation["text"] = annotation["text"].strip()
70
 
71
  if annotation["class"] != "text":
72
+ print(f"Text string outside Text Annotation in {sample_name_tracable(sample)} [{annotation['xmin']:4}, {annotation['ymin']:4}]: {annotation['class']}: {annotation['text']}")
73
 
74
  text += 1
75
 
76
+ if check_images:
77
+ try:
78
+ height, width, _ = read_image(sample).shape
79
+
80
+ if (not sample['width'] == width) or (not sample['height'] == height):
81
+ sample['width'] = width
82
+ sample['height'] = height
83
+ print(f"Corrected Image Dimensions in Sample {sample_name_tracable(sample)}")
84
+
85
+ except AttributeError:
86
+ print(f"Missing or Corrupt Image for Sample {sample_name_tracable(sample)}")
87
+
88
+ return total, ok, mapped, faulty, rotation, mirror_h, mirror_v, text
89
 
90
 
91
  def consistency_circuit(db: list, classes: dict) -> None:
 
104
  print(f" Circuit {circuit}: {cls}: {check}")
105
 
106
 
107
+
108
+ def consistency_text(db: list) -> None:
109
+ """Reports all Text Labels that Exist in a Strict Subset of Image Annotations of the same Circuit"""
110
+
111
+ for circuit in set(sample["circuit"] for sample in db):
112
+ circuit_samples = [sample for sample in db if sample["circuit"] == circuit]
113
+
114
+ circuit_samples_texts = [sorted([bbox["text"] for bbox in sample["bboxes"] if bbox["text"]])
115
+ for sample in circuit_samples]
116
+
117
+ print(circuit)
118
+ for c in circuit_samples_texts:
119
+ print(c)
120
+
121
+
122
+
123
  def circuit_annotations(db: list, classes: dict) -> None:
124
  """Plots the Annotations per Sample and Class"""
125
 
 
178
  plt.show()
179
 
180
 
181
+ def image_sizes(db: list) -> None:
182
+ """Statistics of the Raw Image's Widths and Heights"""
183
+
184
+ widths = [sample['width'] for sample in db]
185
+ heights = [sample['height'] for sample in db]
186
+ print(f"Raw Image Width Range: [{min(widths)}, {max(widths)}]")
187
+ print(f"Raw Image Height Range: [{min(heights)}, {max(heights)}]")
188
+
189
+ plt.title('Image Sizes')
190
+ plt.boxplot([heights, widths], vert=False)
191
+ plt.yticks([2, 1], labels=["width", "height"])
192
+ plt.show()
193
+
194
+
195
  def class_sizes(db: list, classes: dict) -> None:
196
  """"""
197
 
198
  plt.title('BB Sizes')
199
+
200
  plt.boxplot([[max(bbox["xmax"]-bbox["xmin"], bbox["ymax"]-bbox["ymin"])
201
  for sample in db for bbox in sample["bboxes"] if bbox["class"] == cls]
202
+ for cls in list(classes)[::-1]], vert=False)
203
  class_nbrs = np.arange(len(classes))+1
204
+ plt.yticks(class_nbrs, labels=list(classes)[::-1])
205
+ plt.tight_layout()
206
  plt.show()
207
 
208
 
 
211
 
212
  return len([file_name for root, _, files in os.walk(".")
213
  for file_name in files
214
+ if (f"segmentation{os.sep}" if segmentation else "annotation") in root and
215
+ (drafter is None or f"drafter_{drafter}{os.sep}" in root)])
216
 
217
 
218
+ def read_check_write(classes: dict, drafter: int = None, segmentation: bool = False,
219
+ check_images: bool = False, check_texts: bool = False) -> list:
220
  """Reads Annotations, Checks Consistency with Provided Classes
221
  Writes Corrected Annotations Back and Returns the Annotations"""
222
 
223
  db = read_dataset(drafter=drafter, segmentation=segmentation)
224
+ ann_total, ann_ok, ann_mapped, ann_faulty, ann_rot, ann_mirror_h, ann_mirror_v, ann_text = consistency(db,
225
+ classes,
226
+ MAPPING_LOOKUP,
227
+ check_texts=check_texts and not segmentation,
228
+ check_images=check_images)
229
  write_dataset(db, segmentation=segmentation)
230
 
231
  print("")
 
240
  print(f"Faulty Annotations (no recovery): {ann_faulty}")
241
  print(f"Corrected Annotations by Mapping: {ann_mapped}")
242
  print(f"Annotations with Rotation: {ann_rot}")
243
+ print(f"Annotations with Mirror: {ann_mirror_h+ann_mirror_v} = {ann_mirror_h}(H) + {ann_mirror_v}(V)")
244
  print(f"Annotations with Text: {ann_text}")
245
+ print("")
246
 
247
  return db
248
 
 
375
 
376
 
377
  if __name__ == "__main__":
 
 
378
 
379
+ # Prepare Argument Parser
380
+ parser = argparse.ArgumentParser(prog='CGHD Consistency',
381
+ description="Performs Integrity Checks and Statistics on the Dataset.")
382
+ parser.add_argument("-d", "--drafter", type=int, default=None,
383
+ help="Performs the actions on a given drafter only. If none is given, the entire dataset is used.")
384
+ parser.add_argument('-i', "--image-check", action='store_true',
385
+ help="Enables Image Dimension Verification")
386
+ parser.add_argument('-c', "--text-check", action='store_true',
387
+ help="searches for text labels outside text annotations and text annotations without text Label")
388
+ parser.add_argument('-a', "--annotation-consistency", action='store_true',
389
+ help="Enables Annotation Consistency Check (Class Count between Images of the Same Circuit)")
390
+ parser.add_argument('-t', "--text-consistency", action='store_true',
391
+ help="Enables Text Consistency Check (Label Equality between Images of the same Circuit)")
392
+ parser.add_argument('-s', "--statistics", action='store_true',
393
+ help="Performs Extended Statistics")
394
+ args = parser.parse_args()
395
+
396
+ # Load Class Info
397
+ classes = load_classes()
398
 
399
+ # Basic Integrity Checks
400
+ db_bb = read_check_write(classes, args.drafter, segmentation=False,
401
+ check_images=args.image_check, check_texts=args.text_check)
402
+ db_poly = read_check_write(classes, args.drafter, segmentation=True,
403
+ check_images=args.image_check, check_texts=args.text_check)
404
+
405
+ # Consistency Checks between Images of the Same Circuit
406
+ if args.annotation_consistency:
407
+ consistency_circuit(db_bb, classes)
408
+
409
+ if args.text_consistency:
410
+ consistency_text(db_bb)
411
+
412
+ # Statistics
413
+ if args.statistics:
414
+ image_sizes(db_bb)
415
+ class_sizes(db_bb, classes)
416
+ circuit_annotations(db_bb, classes)
417
+ annotation_distribution(db_bb)
418
+ class_distribution(db_bb, classes)
419
+ class_distribution(db_poly, classes)
420
+ text_statistics(db_bb)
loader.py CHANGED
@@ -5,10 +5,11 @@ import os, sys
5
  from os.path import join, realpath
6
  import json
7
  import xml.etree.ElementTree as ET
8
- from lxml import etree
9
 
10
  # Third Party Imports
11
  import cv2
 
 
12
 
13
  __author__ = "Johannes Bayer"
14
  __copyright__ = "Copyright 2022-2023, DFKI"
@@ -55,6 +56,12 @@ def sample_name(sample: dict) -> str:
55
  return f"C{sample['circuit']}_D{sample['drawing']}_P{sample['picture']}"
56
 
57
 
 
 
 
 
 
 
58
  def file_name(sample: dict) -> str:
59
  """return the Raw Image File Name of a Sample"""
60
 
@@ -81,6 +88,8 @@ def read_pascal_voc(path: str) -> dict:
81
  "ymin": int(annotation.find("bndbox/ymin").text),
82
  "ymax": int(annotation.find("bndbox/ymax").text),
83
  "rotation": int(annotation.find("bndbox/rotation").text) if annotation.find("bndbox/rotation") is not None else None,
 
 
84
  "text": annotation.find("text").text if annotation.find("text") is not None else None}
85
  for annotation in root.findall('object')],
86
  "polygons": [], "points": []}
@@ -115,6 +124,12 @@ def write_pascal_voc(sample: dict) -> None:
115
  if bbox["rotation"] is not None:
116
  etree.SubElement(xml_bbox, "rotation").text = str(bbox["rotation"])
117
 
 
 
 
 
 
 
118
  if bbox["text"]:
119
  etree.SubElement(xml_obj, "text").text = bbox["text"]
120
 
@@ -143,6 +158,8 @@ def read_labelme(path: str) -> dict:
143
  'ymax': max(point[1] for point in shape['points'])},
144
  'points': shape['points'],
145
  'rotation': shape.get('rotation', None),
 
 
146
  'text': shape.get('text', None),
147
  'group': shape.get('group_id', None)}
148
  for shape in json_data['shapes']
@@ -168,6 +185,8 @@ def write_labelme(geo_data: dict, path: str = None) -> None:
168
  'group_id': polygon.get('group', None),
169
  'description': polygon.get('description', None),
170
  **({'rotation': polygon['rotation']} if polygon.get('rotation', None) else {}),
 
 
171
  **({'text': polygon['text']} if polygon.get('text', None) else {}),
172
  'shape_type': 'polygon', 'flags': {}}
173
  for polygon in geo_data['polygons']] +
@@ -194,9 +213,9 @@ def read_dataset(drafter: int = None, circuit: int = None, segmentation=False, f
194
  return sorted([(read_labelme if segmentation else read_pascal_voc)(join(root, file_name))
195
  for root, _, files in os.walk(db_root)
196
  for file_name in files
197
- if (folder if folder else ("instances" if segmentation else "annotations")) in root and
198
  (not circuit or f"C{circuit}_" in file_name) and
199
- (not drafter or f"drafter_{drafter}{os.sep}" in root)],
200
  key=lambda sample: sample["circuit"]*100+sample["drawing"]*10+sample["picture"])
201
 
202
 
@@ -207,13 +226,18 @@ def write_dataset(db: list, segmentation=False) -> None:
207
  (write_labelme if segmentation else write_pascal_voc)(sample)
208
 
209
 
 
 
 
 
 
 
 
 
210
  def read_images(**kwargs) -> list:
211
  """Loads Images and BB Annotations and returns them as as List of Pairs"""
212
-
213
- db_root = os.sep.join(realpath(__file__).split(os.sep)[:-1])
214
 
215
- return [(cv2.imread(join(db_root, f"drafter_{sample['drafter']}", "images", file_name(sample))), sample)
216
- for sample in read_dataset(**kwargs)]
217
 
218
 
219
  def read_snippets(**kwargs):
@@ -242,3 +266,4 @@ if __name__ == "__main__":
242
  snippet = cv2.rotate(snippet, cv2.ROTATE_90_COUNTERCLOCKWISE)
243
 
244
  cv2.imwrite(join("test", f"{bbox['text']}___{sample}_{bbox['ymin']}_{bbox['ymax']}_{bbox['xmin']}_{bbox['xmax']}.png"), snippet)
 
 
5
  from os.path import join, realpath
6
  import json
7
  import xml.etree.ElementTree as ET
 
8
 
9
  # Third Party Imports
10
  import cv2
11
+ import numpy as np
12
+ from lxml import etree
13
 
14
  __author__ = "Johannes Bayer"
15
  __copyright__ = "Copyright 2022-2023, DFKI"
 
56
  return f"C{sample['circuit']}_D{sample['drawing']}_P{sample['picture']}"
57
 
58
 
59
+ def sample_name_tracable(sample: dict) -> str:
60
+ """Returns the Unambiguous, Human-Readable Sample Name"""
61
+
62
+ return f"Drafter{sample['drafter']}/{sample_name(sample)}"
63
+
64
+
65
  def file_name(sample: dict) -> str:
66
  """return the Raw Image File Name of a Sample"""
67
 
 
88
  "ymin": int(annotation.find("bndbox/ymin").text),
89
  "ymax": int(annotation.find("bndbox/ymax").text),
90
  "rotation": int(annotation.find("bndbox/rotation").text) if annotation.find("bndbox/rotation") is not None else None,
91
+ "mirror_horizontal": len([tag for tag in annotation.findall("bndbox/mirror") if tag.text=="horizontal"])>0,
92
+ "mirror_vertical": len([tag for tag in annotation.findall("bndbox/mirror") if tag.text=="vertical"])>0,
93
  "text": annotation.find("text").text if annotation.find("text") is not None else None}
94
  for annotation in root.findall('object')],
95
  "polygons": [], "points": []}
 
124
  if bbox["rotation"] is not None:
125
  etree.SubElement(xml_bbox, "rotation").text = str(bbox["rotation"])
126
 
127
+ if bbox["mirror_horizontal"]:
128
+ etree.SubElement(xml_bbox, "mirror").text = "horizontal"
129
+
130
+ if bbox["mirror_vertical"]:
131
+ etree.SubElement(xml_bbox, "mirror").text = "vertical"
132
+
133
  if bbox["text"]:
134
  etree.SubElement(xml_obj, "text").text = bbox["text"]
135
 
 
158
  'ymax': max(point[1] for point in shape['points'])},
159
  'points': shape['points'],
160
  'rotation': shape.get('rotation', None),
161
+ 'mirror_horizontal': shape.get('mirror_horizontal', None),
162
+ 'mirror_vertical': shape.get('mirror_vertical', None),
163
  'text': shape.get('text', None),
164
  'group': shape.get('group_id', None)}
165
  for shape in json_data['shapes']
 
185
  'group_id': polygon.get('group', None),
186
  'description': polygon.get('description', None),
187
  **({'rotation': polygon['rotation']} if polygon.get('rotation', None) else {}),
188
+ **({'mirror_horizontal': polygon['mirror_horizontal']} if polygon.get('mirror_horizontal') else {}),
189
+ **({'mirror_vertical': polygon['mirror_vertical']} if polygon.get('mirror_vertical') else {}),
190
  **({'text': polygon['text']} if polygon.get('text', None) else {}),
191
  'shape_type': 'polygon', 'flags': {}}
192
  for polygon in geo_data['polygons']] +
 
213
  return sorted([(read_labelme if segmentation else read_pascal_voc)(join(root, file_name))
214
  for root, _, files in os.walk(db_root)
215
  for file_name in files
216
+ if (folder if folder else (f"instances" if segmentation else f"annotations")) in root and
217
  (not circuit or f"C{circuit}_" in file_name) and
218
+ (drafter is None or f"drafter_{drafter}{os.sep}" in root)],
219
  key=lambda sample: sample["circuit"]*100+sample["drawing"]*10+sample["picture"])
220
 
221
 
 
226
  (write_labelme if segmentation else write_pascal_voc)(sample)
227
 
228
 
229
+ def read_image(sample: dict) -> np.ndarray:
230
+ """Loads the Image Associated with a DB Sample"""
231
+
232
+ db_root = os.sep.join(realpath(__file__).split(os.sep)[:-1])
233
+
234
+ return cv2.imread(join(db_root, f"drafter_{sample['drafter']}", "images", file_name(sample)))
235
+
236
+
237
  def read_images(**kwargs) -> list:
238
  """Loads Images and BB Annotations and returns them as as List of Pairs"""
 
 
239
 
240
+ return [(read_image(sample), sample) for sample in read_dataset(**kwargs)]
 
241
 
242
 
243
  def read_snippets(**kwargs):
 
266
  snippet = cv2.rotate(snippet, cv2.ROTATE_90_COUNTERCLOCKWISE)
267
 
268
  cv2.imwrite(join("test", f"{bbox['text']}___{sample}_{bbox['ymin']}_{bbox['ymax']}_{bbox['xmin']}_{bbox['xmax']}.png"), snippet)
269
+