franzi2505 committed on
Commit 9d444ef • 1 Parent(s): 1c14bfe

first commit

Files changed (5)
  1. README.md +39 -6
  2. app.py +6 -0
  3. gitattributes +35 -0
  4. pq.py +180 -0
  5. requirements.txt +3 -0
README.md CHANGED
@@ -1,13 +1,46 @@
  ---
  title: PanopticQuality
- emoji: 🏒
- colorFrom: yellow
- colorTo: red
  sdk: gradio
- sdk_version: 4.27.0
  app_file: app.py
  pinned: false
- license: apache-2.0
  ---
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
  ---
  title: PanopticQuality
+ tags:
+ - evaluate
+ - metric
+ description: >-
+   PanopticQuality score
  sdk: gradio
+ sdk_version: 3.19.1
  app_file: app.py
  pinned: false
+ emoji: 🕵️
  ---
 
+ # SEA-AI/PanopticQuality
+
+ This Hugging Face metric uses `seametrics.segmentation.PanopticQuality` under the hood to compute a panoptic quality score. It is a wrapper class for the torchmetrics class [`torchmetrics.detection.PanopticQuality`](https://lightning.ai/docs/torchmetrics/stable/detection/panoptic_quality.html).
+
+ ## Getting Started
+
+ To get started with PanopticQuality, make sure you have the necessary dependencies installed. This metric relies on the `evaluate` and `seametrics[segmentation]` libraries for metric calculation and integration with FiftyOne datasets.
+
+ ### Installation
+
+ First, ensure you have Python 3.8 or later installed. Then, install the required packages using pip:
+
+ ```sh
+ pip install evaluate git+https://github.com/SEA-AI/seametrics@develop
+ ```
+ ### Basic Usage
+
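+ The following is a minimal sketch based on the doctest in `pq.py`. It assumes access to SEA-AI's FiftyOne datasets through `seametrics.fo_utils.utils.fo_to_payload`; the dataset, field, and model names below are taken from that doctest and may differ in your setup:
+
+ ```python
+ import evaluate
+ from seametrics.fo_utils.utils import fo_to_payload
+
+ MODEL_FIELD = ["maskformer-27k-100ep"]
+
+ # Pull ground truth and model predictions from FiftyOne into a payload dict.
+ payload = fo_to_payload(
+     "SAILING_PANOPTIC_DATASET_QA",
+     gt_field="ground_truth_det",
+     models=MODEL_FIELD,
+     sequence_list=["Trip_55_Seq_2", "Trip_197_Seq_1", "Trip_197_Seq_68"],
+     excluded_classes=[""],
+ )
+
+ module = evaluate.load("SEA-AI/PanopticQuality")
+ module.add_payload(payload, model_name=MODEL_FIELD[0])
+ module.compute()  # e.g. tensor(0.2082, dtype=torch.float64)
+ ```
+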
+ ## Metric Settings
+
+ `PQMetric` accepts two optional constructor arguments (see `pq.py`), with a configuration sketch shown below:
+
+ - `label2id` (`dict`): mapping from class names to integer category IDs. Defaults to the 14 SEA-AI maritime classes (`WATER`, `SKY`, `LAND`, `MOTORBOAT`, `FAR_AWAY_OBJECT`, ...).
+ - `stuff` (`list`): class names to treat as uncountable "stuff"; all remaining classes are treated as countable "things". Defaults to `["WATER", "SKY", "LAND", "CONSTRUCTION", "ICE", "OWN_BOAT"]`.
+
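+ A minimal sketch of a custom configuration, assuming `evaluate.load` forwards extra keyword arguments to the metric's constructor (the reduced class list below is purely illustrative):
+
+ ```python
+ import evaluate
+
+ # Only WATER and SKY are scored as "stuff"; SHIP becomes a countable "thing".
+ module = evaluate.load(
+     "SEA-AI/PanopticQuality",
+     label2id={"WATER": 0, "SKY": 1, "SHIP": 2},
+     stuff=["WATER", "SKY"],
+ )
+ ```
+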
+ ## Output Values
+
+ `compute()` returns a single float in `[0, 1]` (as a scalar `torch.Tensor`): 1 corresponds to a perfect panoptic segmentation, 0 to the worst possible one.
+
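+ The docstring in `pq.py` documents the raw input format: 4-d arrays of shape `(batch_size, img_height, img_width, 2)`, where the last axis holds the category index at position 0 and the instance ID at position 1. Below is a minimal sketch of feeding such arrays directly, assuming the wrapper accepts NumPy input (the toy masks are purely illustrative):
+
+ ```python
+ import evaluate
+ import numpy as np
+
+ module = evaluate.load("SEA-AI/PanopticQuality")
+
+ # One 2x2 image; last axis = (category index, instance ID).
+ # Background is WATER (category 0, stuff); one pixel is a MOTORBOAT
+ # instance (category 3, thing) in both prediction and ground truth.
+ reference = np.zeros((1, 2, 2, 2))
+ prediction = np.zeros((1, 2, 2, 2))
+ reference[0, 0, 0] = [3, 1]
+ prediction[0, 0, 0] = [3, 1]
+
+ module.add(prediction=prediction, reference=reference)
+ print(module.compute())  # scalar tensor PQ score
+ ```
+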
+ ## Further References
+
+ - **seametrics Library**: Explore the [seametrics GitHub repository](https://github.com/SEA-AI/seametrics/tree/main) for more details on the underlying library.
+ - **Torchmetrics**: See the [`torchmetrics.detection.PanopticQuality` documentation](https://lightning.ai/docs/torchmetrics/stable/detection/panoptic_quality.html).
+ - **Understanding Metrics**: The panoptic segmentation task, and Panoptic Quality as its evaluation metric, were introduced [in this paper](https://arxiv.org/pdf/1801.00868.pdf).
+
+ ## Contribution
+
+ Your contributions are welcome! If you'd like to improve SEA-AI/PanopticQuality or add new features, please feel free to fork the repository, make your changes, and submit a pull request.
app.py ADDED
@@ -0,0 +1,6 @@
+ import evaluate
+ from evaluate.utils import launch_gradio_widget
+
+
+ module = evaluate.load("SEA-AI/PanopticQuality")
+ launch_gradio_widget(module)
gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
pq.py ADDED
@@ -0,0 +1,180 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Hugging Face `evaluate` wrapper around `seametrics.segmentation.PanopticQuality`."""
+
+ import evaluate
+ import datasets
+ import numpy as np
+
+ from seametrics.segmentation import PanopticQuality
+
+ _CITATION = """\
+ @inproceedings{DBLP:conf/cvpr/KirillovHGRD19,
+   author    = {Alexander Kirillov and
+                Kaiming He and
+                Ross B. Girshick and
+                Carsten Rother and
+                Piotr Doll{\'{a}}r},
+   title     = {Panoptic Segmentation},
+   booktitle = {{IEEE} Conference on Computer Vision and Pattern Recognition, {CVPR}
+                2019, Long Beach, CA, USA, June 16-20, 2019},
+   pages     = {9404--9413},
+   publisher = {Computer Vision Foundation / {IEEE}},
+   year      = {2019},
+   url       = {http://openaccess.thecvf.com/content\_CVPR\_2019/html/Kirillov\_Panoptic\_Segmentation\_CVPR\_2019\_paper.html}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This evaluation metric calculates Panoptic Quality (PQ) for panoptic segmentation masks.
+ """
+
+
+ _KWARGS_DESCRIPTION = """
+ Calculates PQ-score given predicted and ground truth panoptic segmentation masks.
+ Args:
+     predictions: a 4-d array of shape (batch_size, img_height, img_width, 2).
+         The last dimension should hold the category index at position 0, and
+         the instance ID at position 1.
+     references: a 4-d array of shape (batch_size, img_height, img_width, 2).
+         The last dimension should hold the category index at position 0, and
+         the instance ID at position 1.
+ Returns:
+     A single float number in range [0, 1] that represents the PQ score.
+     1 is perfect panoptic segmentation, 0 is worst possible panoptic segmentation.
+ Examples:
+     >>> import evaluate
+     >>> from seametrics.fo_utils.utils import fo_to_payload
+     >>> MODEL_FIELD = ["maskformer-27k-100ep"]
+     >>> payload = fo_to_payload("SAILING_PANOPTIC_DATASET_QA",
+     ...                         gt_field="ground_truth_det",
+     ...                         models=MODEL_FIELD,
+     ...                         sequence_list=["Trip_55_Seq_2", "Trip_197_Seq_1", "Trip_197_Seq_68"],
+     ...                         excluded_classes=[""])
+     >>> module = evaluate.load("SEA-AI/PanopticQuality")
+     >>> module.add_payload(payload, model_name=MODEL_FIELD[0])
+     >>> module.compute()
+     100%|██████████| 3/3 [00:03<00:00, 1.30s/it]
+     Added data ...
+     Start computing ...
+     Finished!
+     tensor(0.2082, dtype=torch.float64)
+ """
+
+
+ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
+ class PQMetric(evaluate.Metric):
+     def __init__(
+         self,
+         label2id: dict = None,
+         stuff: list = None,
+         **kwargs
+     ):
+         super().__init__(**kwargs)
+         DEFAULT_LABEL2ID = {
+             'WATER': 0,
+             'SKY': 1,
+             'LAND': 2,
+             'MOTORBOAT': 3,
+             'FAR_AWAY_OBJECT': 4,
+             'SAILING_BOAT_WITH_CLOSED_SAILS': 5,
+             'SHIP': 6,
+             'WATERCRAFT': 7,
+             'SPHERICAL_BUOY': 8,
+             'CONSTRUCTION': 9,
+             'FLOTSAM': 10,
+             'SAILING_BOAT_WITH_OPEN_SAILS': 11,
+             'CONTAINER': 12,
+             'PILLAR_BUOY': 13
+         }
+         DEFAULT_STUFF = ["WATER", "SKY", "LAND", "CONSTRUCTION", "ICE", "OWN_BOAT"]
+         self.label2id = label2id if label2id is not None else DEFAULT_LABEL2ID
+         self.stuff = stuff if stuff is not None else DEFAULT_STUFF
+         self.pq_metric = PanopticQuality(
+             things={self.label2id[label] for label in self.label2id if label not in self.stuff},
+             stuffs={self.label2id[label] for label in self.label2id if label in self.stuff}
+         )
+
+     def _info(self):
+         return evaluate.MetricInfo(
+             # This is the description that will appear on the modules page.
+             module_type="metric",
+             description=_DESCRIPTION,
+             citation=_CITATION,
+             inputs_description=_KWARGS_DESCRIPTION,
+             # This defines the format of each prediction and reference
+             features=datasets.Features(
+                 {
+                     "predictions": datasets.Sequence(
+                         datasets.Sequence(
+                             datasets.Sequence(
+                                 datasets.Sequence(datasets.Value("float"))
+                             )
+                         ),
+                     ),
+                     "references": datasets.Sequence(  # batch
+                         datasets.Sequence(  # img height
+                             datasets.Sequence(  # img width
+                                 datasets.Sequence(datasets.Value("float"))  # 2
+                             )
+                         ),
+                     ),
+                 }
+             ),
+             # Additional links to the codebase or references
+             codebase_urls=[
+                 "https://lightning.ai/docs/torchmetrics/stable/detection/panoptic_quality.html"
+             ],
+         )
+
+     def add(self, *, prediction, reference, **kwargs):
+         """Adds a batch of predictions and references to the metric"""
+         self.pq_metric.update(prediction, reference)
+
+         # does not impact the metric, but is required for the interface x_x
+         super(evaluate.Metric, self).add(
+             prediction=self._postprocess(prediction),
+             reference=self._postprocess(reference),
+             **kwargs
+         )
+
+     def _compute(self, *, predictions, references, **kwargs):
+         """Called within the evaluate.Metric.compute() method"""
+         return self.pq_metric.compute()
+
+     def add_payload(self, payload: dict, model_name: str = None):
+         """Converts the payload to the format expected by the metric"""
+         # import only if needed since fiftyone is not a direct dependency
+         from seametrics.segmentation.utils import payload_to_seg_metric
+
+         predictions, references, label2id = payload_to_seg_metric(payload, model_name, self.label2id)
+         self.label2id = label2id
+         self.add(prediction=predictions, reference=references)
+
+     def _postprocess(self, np_array):
+         """Converts the numpy arrays to lists for type checking"""
+         return self._np_to_lists(np_array)
+
+     def _np_to_lists(self, d):
+         """datasets does not support numpy arrays for type checking"""
+         if isinstance(d, np.ndarray):
+             if d.ndim == 1:
+                 return d.tolist()
+             else:
+                 return [self._np_to_lists(sub_arr) for sub_arr in d]
+         else:
+             return d
requirements.txt ADDED
@@ -0,0 +1,3 @@
+ git+https://github.com/huggingface/evaluate@main
+ git+https://github.com/SEA-AI/seametrics@develop
+ fiftyone