---
title: Submission Template
emoji: 🔥
colorFrom: yellow
colorTo: green
sdk: docker
pinned: false
---
# Wildfire Detection Task for the Frugal AI 2025 Challenge
## Task Overview
As part of the Frugal AI 2025 Challenge, I am working on the wildfire detection task using the PyroNear/pyro-sdis dataset available on Hugging Face (https://huggingface.co/datasets/pyronear/pyro-sdis). This task aims to develop a model capable of detecting wildfires in images efficiently, contributing to early detection and mitigation of wildfire damage while minimizing environmental costs.
## Dataset Overview
This dataset is specifically designed for wildfire detection, containing labeled images of wildfire-related and non-wildfire-related scenes.
Key features:
- Labels: binary classification (wildfire present or not).
- Images: captured under real-world conditions, including diverse environments and challenging scenarios such as smoke, clouds, and varying lighting.
- Size: ~33,000 labeled images, well-suited for training and validating computer vision models.
  - 28,103 images with smoke
  - 31,975 smoke instances

The dataset is formatted to be compatible with the Ultralytics YOLO framework, enabling efficient training of object detection models.
Usage: ideal for fine-tuning state-of-the-art models for wildfire detection tasks.
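As a quick sanity check, the dataset can be pulled straight from the Hub with the `datasets` library. This is only a minimal exploration sketch: the `"train"` split name and the field layout are assumptions to verify against the printed dataset info before exporting anything to the YOLO folder structure.

```python
from datasets import load_dataset

# Pull PyroNear/pyro-sdis from the Hugging Face Hub.
ds = load_dataset("pyronear/pyro-sdis")

# Inspect the available splits, their sizes, and the annotation fields
# before exporting the data to the Ultralytics YOLO folder layout.
print(ds)
sample = ds["train"][0]   # "train" split name is an assumption; check the printout above
print(sample.keys())
```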
## Model Development Plan
### Model Choice: YOLOv11s
Why YOLOv11s?
- Efficiency: YOLO (You Only Look Once) models are known for their high speed and accuracy, making them well suited to real-time applications.
- Versatility: YOLOv11s builds on prior versions, improving object detection, handling of small objects, and performance under challenging visual conditions.
- Frugality: optimized for computational efficiency, aligning with the sustainability goals of the Frugal AI Challenge.
### Requirements:
Additional packages:
- ultralytics
- torch
- numpy
The Dockerfile had to be updated to install libgl1 (a runtime dependency of OpenCV, which Ultralytics relies on):
```dockerfile
USER root
RUN apt-get update && apt-get install -y libgl1
# Switch back to the non-root user
USER user
```
## Data Preprocessing:
Fine-tuning YOLOv11s (sketched below):
1. Load a pre-trained YOLOv11s model as the starting point (transfer learning).
2. Replace the output layer to match the binary task (wildfire vs. no wildfire).
3. Train the model on the PyroNear/pyro-sdis dataset.
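A minimal fine-tuning sketch with the Ultralytics API; the checkpoint name `yolo11s.pt` follows the Ultralytics naming scheme, `pyro-sdis.yaml` is a hypothetical dataset config pointing at the images and labels exported from PyroNear/pyro-sdis, and the hyperparameters are illustrative rather than the values used for the submission.

```python
from ultralytics import YOLO

# Start from a pre-trained small checkpoint (transfer learning).
model = YOLO("yolo11s.pt")  # Ultralytics checkpoint name; adjust to the exact variant used

# Fine-tune on the PyroNear data; "pyro-sdis.yaml" is a hypothetical dataset
# config listing the train/val image folders and the single "smoke" class.
model.train(
    data="pyro-sdis.yaml",
    epochs=50,        # illustrative values, not the submission settings
    imgsz=640,
    batch=16,
)
```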
## Evaluation:
Metrics:
- Precision and recall to assess detection accuracy.
- Inference time to evaluate real-time feasibility.

The model's carbon footprint and energy consumption are tracked with CodeCarbon; this information helps ensure the model stays aligned with the sustainability objectives of the Frugal AI Challenge.
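A minimal sketch of wrapping validation in a CodeCarbon `EmissionsTracker`, assuming the same hypothetical dataset config as above; the weights path is a placeholder, and the precision/recall values come from the metrics object returned by Ultralytics validation.

```python
from codecarbon import EmissionsTracker
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # hypothetical path to the fine-tuned weights

tracker = EmissionsTracker(project_name="frugal-ai-wildfire")
tracker.start()
metrics = model.val(data="pyro-sdis.yaml", split="val")  # hypothetical dataset config
emissions_kg = tracker.stop()  # estimated kg CO2eq for the validation run

print(f"Precision: {metrics.box.mp:.3f}  Recall: {metrics.box.mr:.3f}")
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```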
## Results:
At the beginning of the challenge, the evaluation was based on 20% of the train dataset, so we had to be careful when discussing model performance:
a high accuracy on this portion of the train dataset could hide over-fitting of the model.
That is what I ran into when reaching an accuracy of 0.911 and a mean_iou of 0.817 on the train dataset:
the model over-fits even though it over-performs on the train data (see picture below).
Because of the challenge's evaluation criteria, this model was used for the submission (before the update of 27-01-2025).
<img src="https://cdn-uploads.huggingface.co/production/uploads/666354284044e2b1c3287c22/LptPoEeSGH22MG_XftxdP.png" alt="over-fitting" width="600">
As a professional data scientist, I trained another model that was evaluated on the "val" split (see picture below).
<img src="https://cdn-uploads.huggingface.co/production/uploads/666354284044e2b1c3287c22/k00JqZDpHxGqKMweZQzw0.png" alt="No over-fitting" width="600">
It still reaches a model accuracy of 0.907 and a max iou of 0.808 (before the update of 27-01-2025, on 20% of 'train').
UPDATE: a model accuracy of 0.799 and a max iou of 0.740 (on 'val').