---
title: Detection Metrics
emoji: π
colorFrom: indigo
colorTo: green
sdk: gradio
sdk_version: 3.36.1
#app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# COCO Metrics

COCO Metrics is a Python package that provides evaluation metrics for **object detection** tasks using the COCO (Common Objects in Context) [evaluation protocol](https://cocodataset.org/#detection-eval). In the future, instance segmentation tasks will also be supported.
## Advantages

* This project does not depend directly on [pycocotools](https://github.com/cocodataset/cocoapi/tree/master/PythonAPI/pycocotools), COCO's official code to compute the metrics.
* It does not require a `.json` annotation file on disk, as originally required by pycocotools; the ground truth can be kept in memory (see the sketch after this list).
* Integrated with the Hugging Face 🤗 `evaluate` library.
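
Because no annotation file has to be written to disk, the ground truth can be passed as an in-memory COCO-style structure. The snippet below sketches the standard COCO annotation layout as an assumption; whether the evaluator consumes this dictionary directly or a wrapper object built from it is not specified here, so check [example.py](https://github.com/rafaelpadilla/coco_metrics/blob/main/example.py) for the exact type expected by the `coco` argument.

```
# Minimal COCO-style ground truth kept entirely in memory
# (illustrative only; the exact object expected by the evaluator may differ).
coco_gt = {
    "images": [{"id": 1, "width": 640, "height": 480}],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [100.0, 120.0, 50.0, 80.0],  # [x, y, width, height]
            "area": 50.0 * 80.0,
            "iscrowd": 0,
        }
    ],
    "categories": [{"id": 1, "name": "person"}],
}
```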
## Metrics

The following 12 metrics are computed to characterize the performance of an object detector (the IoU thresholds and area ranges are illustrated in the sketch after this list):
* **Average Precision** (AP) IoU=.50:.05:.95
* **Average Precision** (AP) IoU=.50
* **Average Precision** (AP) IoU=.75
* **Average Precision** (AP) Across Scales for small objects: area < 32²
* **Average Precision** (AP) Across Scales for medium objects: 32² < area < 96²
* **Average Precision** (AP) Across Scales for large objects: area > 96²
* **Average Recall** (AR) given 1 detection per image
* **Average Recall** (AR) given 10 detections per image
* **Average Recall** (AR) given 100 detections per image
* **Average Recall** (AR) for small objects: area < 32²
* **Average Recall** (AR) for medium objects: 32² < area < 96²
* **Average Recall** (AR) for large objects: area > 96²
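
To make these definitions concrete: AP at IoU=.50:.05:.95 is the AP averaged over ten IoU thresholds (0.50, 0.55, ..., 0.95), and the scale buckets compare box area in pixels² against 32² = 1024 and 96² = 9216. The snippet below only illustrates those definitions; it is not part of the package API.

```
import numpy as np

# The ten IoU thresholds behind "IoU=.50:.05:.95"
iou_thresholds = np.linspace(0.50, 0.95, 10)  # [0.50, 0.55, ..., 0.95]

def size_bucket(area: float) -> str:
    """Classify a box by its area (in pixels²) using the COCO scale ranges."""
    if area < 32 ** 2:       # area < 1024 px²
        return "small"
    if area < 96 ** 2:       # 1024 px² <= area < 9216 px²
        return "medium"
    return "large"           # area >= 9216 px²

print(iou_thresholds)
print(size_bucket(50.0 * 80.0))  # a 50x80 box has area 4000 px², i.e. "medium"
```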
## Installation

COCO Metrics can be installed by cloning the repository, creating a conda environment, and installing the required packages with pip:
```
# Clone the project
git clone https://github.com/rafaelpadilla/coco_metrics
cd coco_metrics

# Create environment
conda create -n coco-metrics python=3.10
conda activate coco-metrics

# Install packages
pip install -r requirements.txt
```
## Example

The script [example.py](https://github.com/rafaelpadilla/coco_metrics/blob/main/example.py) shows how to use the COCO evaluator with the Hugging Face 🤗 `evaluate` library.

The snippet below illustrates how to call the evaluator:
```
import evaluate

# Load the evaluator from the Hugging Face Hub
# (coco_gt holds the COCO-format ground-truth annotations)
coco_bbx_evaluator = evaluate.load("rafaelpadilla/detection_metrics", coco=coco_gt, iou_type="bbox")

# While looping over your dataset, accumulate predictions and ground truths
for batch in dataloader:
    results = ...  # predictions computed by your model for this batch
    labels = ...   # ground-truth labels for this batch
    # Add predictions and expected labels to the evaluator
    coco_bbx_evaluator.add(prediction=results, reference=labels)

# Compute the metrics and show the results
results = coco_bbx_evaluator.compute()
print(results)
```
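
The exact per-image schema of `results` and `labels` is best checked in example.py; the sketch below assumes a common per-image dictionary convention for detection metrics (boxes as absolute `[xmin, ymin, xmax, ymax]` coordinates) and should be read as a hypothetical illustration rather than the package's documented API.

```
import torch

# Hypothetical per-image schema (an assumption; check example.py for the
# exact keys and box format this evaluator expects).
results = [{
    "scores": torch.tensor([0.92, 0.51]),
    "labels": torch.tensor([1, 3]),
    "boxes": torch.tensor([[10.0, 20.0, 60.0, 100.0],
                           [30.0, 40.0, 90.0, 150.0]]),
}]
labels = [{
    "image_id": torch.tensor([1]),
    "labels": torch.tensor([1]),
    "boxes": torch.tensor([[12.0, 22.0, 58.0, 98.0]]),
}]

# Continuing the snippet above:
coco_bbx_evaluator.add(prediction=results, reference=labels)
```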
## References

[1] [COCO Metrics](https://cocodataset.org/#detection-eval)

[2] [A Survey on Performance Metrics for Object-Detection Algorithms](https://www.researchgate.net/profile/Rafael-Padilla/publication/343194514_A_Survey_on_Performance_Metrics_for_Object-Detection_Algorithms/links/5f1b5a5e45851515ef478268/A-Survey-on-Performance-Metrics-for-Object-Detection-Algorithms.pdf)

[3] [A comparative analysis of object detection metrics with a companion open-source toolkit](https://www.researchgate.net/profile/Rafael-Padilla/publication/343194514_A_Survey_on_Performance_Metrics_for_Object-Detection_Algorithms/links/5f1b5a5e45851515ef478268/A-Survey-on-Performance-Metrics-for-Object-Detection-Algorithms.pdf)