# Open Images Challenge Evaluation
The Object Detection API currently supports several evaluation metrics used
in the
[Open Images Challenge 2018](https://storage.googleapis.com/openimages/web/challenge.html)
and
[Open Images Challenge 2019](https://storage.googleapis.com/openimages/web/challenge2019.html).
In addition, several data processing tools are available. Detailed instructions
on using the tools for each track are available below.
**NOTE:** all data links have been updated for the Open Images Challenge 2019.
## Object Detection Track
The
[Object Detection metric](https://storage.googleapis.com/openimages/web/evaluation.html#object_detection_eval)
protocol requires pre-processing of the released data to ensure correct
evaluation: the released data contains only the leaf-most bounding box
annotations and image-level labels. The evaluation metric implementation is
available in the
class `OpenImagesChallengeEvaluator`.
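To make the pre-processing concrete, here is a minimal sketch of hierarchical
label expansion, assuming the `LabelName`/`Subcategory` structure of the
released hierarchy JSON; the shipped tool
`oid_hierarchical_labels_expansion.py` (used in step 3 below) handles the full
CSV formats:

```
# Minimal sketch: every leaf annotation also counts for all of its ancestor
# classes in the hierarchy. File and function names are illustrative.
import json


def build_parent_map(node, ancestors=(), parent_map=None):
  """Maps each label to the set of its ancestor labels."""
  if parent_map is None:
    parent_map = {}
  label = node['LabelName']
  parent_map.setdefault(label, set()).update(ancestors)
  for child in node.get('Subcategory', []):
    build_parent_map(child, ancestors + (label,), parent_map)
  return parent_map


with open('challenge-2019-label500-hierarchy.json') as f:
  parent_map = build_parent_map(json.load(f))


def expand_label(label):
  # Note: the root of the released hierarchy is a pseudo-label, not one of
  # the challenge classes; the shipped tool excludes it from the expansion.
  return {label} | parent_map.get(label, set())
```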
1. Download
[class hierarchy of Open Images Detection Challenge 2019](https://storage.googleapis.com/openimages/challenge_2019/challenge-2019-label500-hierarchy.json)
in JSON format.
2. Download
[ground-truth bounding boxes](https://storage.googleapis.com/openimages/challenge_2019/challenge-2019-validation-detection-bbox.csv)
and
[image-level labels](https://storage.googleapis.com/openimages/challenge_2019/challenge-2019-validation-detection-human-imagelabels.csv).
3. Run the following command to create hierarchical expansion of the bounding
boxes and image-level label annotations:
```
HIERARCHY_FILE=/path/to/challenge-2019-label500-hierarchy.json
BOUNDING_BOXES=/path/to/challenge-2019-validation-detection-bbox
IMAGE_LABELS=/path/to/challenge-2019-validation-detection-human-imagelabels
python object_detection/dataset_tools/oid_hierarchical_labels_expansion.py \
--json_hierarchy_file=${HIERARCHY_FILE} \
--input_annotations=${BOUNDING_BOXES}.csv \
--output_annotations=${BOUNDING_BOXES}_expanded.csv \
--annotation_type=1
python object_detection/dataset_tools/oid_hierarchical_labels_expansion.py \
--json_hierarchy_file=${HIERARCHY_FILE} \
--input_annotations=${IMAGE_LABELS}.csv \
--output_annotations=${IMAGE_LABELS}_expanded.csv \
--annotation_type=2
```
4. If you are not using Tensorflow, you can run the evaluation directly using
   your algorithm's output and the generated ground-truth files.

After step 3 you will have produced the ground-truth files suitable for running
the 'OID Challenge Object Detection Metric 2019' evaluation. To run the
evaluation, use the following command:
```
INPUT_PREDICTIONS=/path/to/detection_predictions.csv
OUTPUT_METRICS=/path/to/output/metrics/file
python models/research/object_detection/metrics/oid_challenge_evaluation.py \
--input_annotations_boxes=${BOUNDING_BOXES}_expanded.csv \
--input_annotations_labels=${IMAGE_LABELS}_expanded.csv \
--input_class_labelmap=object_detection/data/oid_object_detection_challenge_500_label_map.pbtxt \
--input_predictions=${INPUT_PREDICTIONS} \
--output_metrics=${OUTPUT_METRICS}
```
Note that the predictions file must contain the following keys:
ImageID,LabelName,Score,XMin,XMax,YMin,YMax
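For instance, a single prediction row might look as follows (the image ID,
class MID, and coordinates are illustrative; box coordinates are normalized to
[0, 1] in the Open Images format):

```
ImageID,LabelName,Score,XMin,XMax,YMin,YMax
0001eeaf4aed83f9,/m/0cmf2,0.9,0.1,0.5,0.2,0.7
```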
For the Object Detection Track, the participants will be ranked on:
- "OpenImagesDetectionChallenge_Precision/[email protected]"
To use evaluation within Tensorflow training, use metric name
`oid_challenge_detection_metrics` in the evaluation config.
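For example, a minimal illustrative fragment of the pipeline configuration:

```
eval_config: {
  metrics_set: 'oid_challenge_detection_metrics'
}
```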
## Instance Segmentation Track
The
[Instance Segmentation metric](https://storage.googleapis.com/openimages/web/evaluation.html#instance_segmentation_eval)
can be directly evaluated using the ground-truth data and model predictions. The
evaluation metric implementation is available in the class
`OpenImagesChallengeEvaluator`.
1. Download
[class hierarchy of Open Images Instance Segmentation Challenge 2019](https://storage.googleapis.com/openimages/challenge_2019/challenge-2019-label300-segmentable-hierarchy.json)
in JSON format.
2. Download
[ground-truth bounding boxes](https://storage.googleapis.com/openimages/challenge_2019/challenge-2019-validation-segmentation-bbox.csv)
and
[image-level labels](https://storage.googleapis.com/openimages/challenge_2019/challenge-2019-validation-segmentation-labels.csv).
3. Download instance segmentation files for the validation set (see
[Open Images Challenge Downloads page](https://storage.googleapis.com/openimages/web/challenge2019_downloads.html)).
The download consists of a set of .zip archives containing binary .png
masks.
These should be transformed into a single CSV file in the format:
ImageID,LabelName,ImageWidth,ImageHeight,XMin,YMin,XMax,YMax,IsGroupOf,Mask
where Mask is the MS COCO RLE encoding of the binary mask stored in the .png
file, compressed with zlib and re-coded with base64. See an example
implementation of the encoding function
[here](https://gist.github.com/pculliton/209398a2a52867580c6103e25e55d93c); a
short sketch is also given at the end of this track's instructions.
4. Run the following command to create the hierarchical expansion of the
   instance segmentation, bounding box, and image-level label annotations:
```
HIERARCHY_FILE=/path/to/challenge-2019-label300-segmentable-hierarchy.json
BOUNDING_BOXES=/path/to/challenge-2019-validation-segmentation-bbox
IMAGE_LABELS=/path/to/challenge-2019-validation-segmentation-labels
# Path prefix of the CSV produced in step 3 (".csv" is appended below).
INSTANCE_SEGMENTATIONS=/path/to/instance_segmentations
python object_detection/dataset_tools/oid_hierarchical_labels_expansion.py \
--json_hierarchy_file=${HIERARCHY_FILE} \
--input_annotations=${BOUNDING_BOXES}.csv \
--output_annotations=${BOUNDING_BOXES}_expanded.csv \
--annotation_type=1
python object_detection/dataset_tools/oid_hierarchical_labels_expansion.py \
--json_hierarchy_file=${HIERARCHY_FILE} \
--input_annotations=${IMAGE_LABELS}.csv \
--output_annotations=${IMAGE_LABELS}_expanded.csv \
--annotation_type=2
python object_detection/dataset_tools/oid_hierarchical_labels_expansion.py \
--json_hierarchy_file=${HIERARCHY_FILE} \
--input_annotations=${INSTANCE_SEGMENTATIONS}.csv \
--output_annotations=${INSTANCE_SEGMENTATIONS}_expanded.csv \
--annotation_type=1
```
5. If you are not using Tensorflow, you can run the evaluation directly using
   your algorithm's output and the generated ground-truth files:
```
INPUT_PREDICTIONS=/path/to/instance_segmentation_predictions.csv
OUTPUT_METRICS=/path/to/output/metrics/file
python models/research/object_detection/metrics/oid_challenge_evaluation.py \
--input_annotations_boxes=${BOUNDING_BOXES}_expanded.csv \
--input_annotations_labels=${IMAGE_LABELS}_expanded.csv \
--input_class_labelmap=object_detection/data/oid_object_detection_challenge_500_label_map.pbtxt \
--input_predictions=${INPUT_PREDICTIONS} \
--input_annotations_segm=${INSTANCE_SEGMENTATIONS}_expanded.csv \
--output_metrics=${OUTPUT_METRICS}
```
Note that the predictions file must contain the following keys:
ImageID,ImageWidth,ImageHeight,LabelName,Score,Mask
The Mask values must be encoded the same way as the ground-truth masks.
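As a reference for the encoding described in step 3 above, here is a minimal
sketch, assuming `pycocotools`, `numpy`, and `Pillow` are installed; the gist
linked above is the canonical implementation:

```
# Sketch of the Mask-column encoding: MS COCO RLE, zlib-compressed, then
# base64-coded. Function and path names are illustrative.
import base64
import zlib

import numpy as np
from PIL import Image
from pycocotools import mask as coco_mask


def encode_mask(png_path):
  """Returns the base64 string for the Mask column of one binary .png mask."""
  mask = np.asarray(Image.open(png_path))
  # COCO RLE encoding expects a Fortran-contiguous binary uint8 array.
  rle = coco_mask.encode(np.asfortranarray((mask > 0).astype(np.uint8)))
  # Compress the RLE counts with zlib and re-code with base64.
  return base64.b64encode(zlib.compress(rle['counts'])).decode('ascii')
```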
For the Instance Segmentation Track, the participants will be ranked on:
- "OpenImagesInstanceSegmentationChallenge_Precision/[email protected]"
## Visual Relationships Detection Track
The
[Visual Relationships Detection metrics](https://storage.googleapis.com/openimages/web/evaluation.html#visual_relationships_eval)
can be directly evaluated using the ground-truth data and model predictions. The
evaluation metric implementation is available in the classes
`VRDRelationDetectionEvaluator` and `VRDPhraseDetectionEvaluator`.
1. Download the ground-truth
[visual relationships annotations](https://storage.googleapis.com/openimages/challenge_2019/challenge-2019-validation-vrd.csv)
and
[image-level labels](https://storage.googleapis.com/openimages/challenge_2019/challenge-2019-validation-vrd-labels.csv).
2. Run the following command to produce the final metrics:
```
INPUT_ANNOTATIONS_BOXES=/path/to/challenge-2019-validation-vrd.csv
INPUT_ANNOTATIONS_LABELS=/path/to/challenge-2019-validation-vrd-labels.csv
INPUT_PREDICTIONS=/path/to/predictions.csv
INPUT_CLASS_LABELMAP=/path/to/oid_object_detection_challenge_500_label_map.pbtxt
INPUT_RELATIONSHIP_LABELMAP=/path/to/relationships_labelmap.pbtxt
OUTPUT_METRICS=/path/to/output/metrics/file
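# Append the attribute classes (targets of the 'is' relationship) to the class labelmap: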
echo "item { name: '/m/02gy9n' id: 602 display_name: 'Transparent' }
item { name: '/m/05z87' id: 603 display_name: 'Plastic' }
item { name: '/m/0dnr7' id: 604 display_name: '(made of)Textile' }
item { name: '/m/04lbp' id: 605 display_name: '(made of)Leather' }
item { name: '/m/083vt' id: 606 display_name: 'Wooden'}
">>${INPUT_CLASS_LABELMAP}
echo "item { name: 'at' id: 1 display_name: 'at' }
item { name: 'on' id: 2 display_name: 'on (top of)' }
item { name: 'holds' id: 3 display_name: 'holds' }
item { name: 'plays' id: 4 display_name: 'plays' }
item { name: 'interacts_with' id: 5 display_name: 'interacts with' }
item { name: 'wears' id: 6 display_name: 'wears' }
item { name: 'is' id: 7 display_name: 'is' }
item { name: 'inside_of' id: 8 display_name: 'inside of' }
item { name: 'under' id: 9 display_name: 'under' }
item { name: 'hits' id: 10 display_name: 'hits' }
"> ${INPUT_RELATIONSHIP_LABELMAP}
python object_detection/metrics/oid_vrd_challenge_evaluation.py \
--input_annotations_boxes=${INPUT_ANNOTATIONS_BOXES} \
--input_annotations_labels=${INPUT_ANNOTATIONS_LABELS} \
--input_predictions=${INPUT_PREDICTIONS} \
--input_class_labelmap=${INPUT_CLASS_LABELMAP} \
--input_relationship_labelmap=${INPUT_RELATIONSHIP_LABELMAP} \
--output_metrics=${OUTPUT_METRICS}
```
Note that the predictions file must contain the following keys:
ImageID,LabelName1,LabelName2,RelationshipLabel,Score,XMin1,XMax1,YMin1,YMax1,XMin2,XMax2,YMin2,YMax2
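For instance, a single prediction row might look as follows (all values
illustrative; the two boxes belong to the first and second object of the
relationship triplet, with normalized coordinates):

```
ImageID,LabelName1,LabelName2,RelationshipLabel,Score,XMin1,XMax1,YMin1,YMax1,XMin2,XMax2,YMin2,YMax2
0001eeaf4aed83f9,/m/01mzpv,/m/04bcr3,at,0.7,0.1,0.4,0.3,0.8,0.05,0.6,0.2,0.9
```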
The participants of the challenge will be ranked by a weighted average of the following three metrics:
- "VRDMetric_Relationships[email protected]"
- "VRDMetric_Relationships_Recall@[email protected]"
- "VRDMetric_Phrases[email protected]"