---
title: Detection Metrics
emoji: π
colorFrom: indigo
colorTo: green
sdk: gradio
sdk_version: 3.36.1
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# COCO Metrics
COCO Metrics is a Python package that provides evaluation metrics for object detection tasks using the COCO (Common Objects in Context) evaluation protocol. In the future, instance segmentation tasks will also be supported.
## Advantages
- This project does not depend directly on pycocotools, COCO's official code for computing the metrics.
- It does not require a `.json` file on disk, as originally required by pycocotools.
- Integrated with the Hugging Face 🤗 `evaluate` library.
## Metrics
The following 12 metrics are computed to characterize the performance of an object detector (the area ranges used by the scale-based metrics are illustrated in the sketch after this list):
- Average Precision (AP) IoU=.50:.05:.95
- Average Precision (AP) IoU=.50
- Average Precision (AP) IoU=.75
- Average Precision (AP) Across Scales for small objects: area < 32²
- Average Precision (AP) Across Scales for medium objects: 32² < area < 96²
- Average Precision (AP) Across Scales for large objects: area > 96²
- Average Recall (AR) given 1 detection per image
- Average Recall (AR) given 10 detections per image
- Average Recall (AR) given 100 detections per image
- Average Recall (AR) for small objects: area < 32²
- Average Recall (AR) for medium objects: 32² < area < 96²
- Average Recall (AR) for large objects: area > 96²
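
The scale buckets above follow the standard COCO convention, where object area is measured in pixels². The helper below is a minimal illustrative sketch (not part of this package) showing how a box area maps to its COCO size bucket:

```python
# Illustrative sketch only: COCO-style size buckets for a box area (in pixels²).
# The thresholds 32**2 and 96**2 follow the standard COCO evaluation protocol.
def coco_size_bucket(area: float) -> str:
    """Return 'small', 'medium', or 'large' according to the COCO area ranges."""
    if area < 32 ** 2:
        return "small"
    if area < 96 ** 2:
        return "medium"
    return "large"


# Example: a 50x40 box has area 2000 px², which falls in the 'medium' range.
print(coco_size_bucket(50 * 40))  # -> "medium"
```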
## Installation
COCO Metrics can be installed by cloning the repository and installing its requirements:
```bash
# Clone the project
git clone https://github.com/rafaelpadilla/coco_metrics
cd coco_metrics

# Create the environment
conda create -n coco-metrics python=3.10
conda activate coco-metrics

# Install the required packages
pip install -r requirements.txt
```
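
After installation, a quick way to confirm that the 🤗 `evaluate` library is available in the new environment is to import it and print its version:

```python
# Minimal sanity check that the evaluate library was installed correctly.
import evaluate

print(evaluate.__version__)
```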
## Example
The script `example.py` shows how to use the COCO evaluator with the Hugging Face 🤗 `evaluate` library. The snippet below illustrates how to call the evaluator:
```python
import evaluate

# Load the evaluator from the Hugging Face Hub,
# passing the COCO ground-truth object and the IoU type
coco_bbx_evaluator = evaluate.load(
    "rafaelpadilla/detection_metrics", coco=coco_gt, iou_type="bbox"
)

# Within your dataset loop, accumulate predictions and ground-truth labels
for batch in dataloader:
    results = ...  # predictions computed by your model for this batch
    labels = ...   # ground-truth labels for this batch
    # Add the predictions and the expected labels to the evaluator
    coco_bbx_evaluator.add(prediction=results, reference=labels)

# Compute the metrics and show the results
results = coco_bbx_evaluator.compute()
print(results)
```
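
The exact structure expected for `prediction` and `reference` is defined by the evaluator itself; as a purely illustrative assumption, detection evaluators commonly take per-image dictionaries with boxes, scores, and labels, along these lines:

```python
# Hypothetical payloads for illustration only; check the evaluator's documentation
# for the exact keys and box format it expects.
results = [
    {
        "boxes": [[10.0, 20.0, 110.0, 220.0]],  # predicted boxes for one image
        "scores": [0.92],                        # confidence for each box
        "labels": [3],                           # predicted category ids
    }
]
labels = [
    {
        "image_id": 42,                          # id of the image in the COCO ground truth
        "boxes": [[12.0, 18.0, 108.0, 225.0]],   # ground-truth boxes
        "labels": [3],                           # ground-truth category ids
    }
]
```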