---
license: apache-2.0
language:
- en
tags:
- Pytorch
- mmsegmentation
- segmentation
- burn scars
- Geospatial
- Foundation model
datasets:
- ibm-nasa-geospatial/hls_burn_scars
metrics:
- accuracy
- IoU
- F1 Score
---

### Model and Inputs
The pretrained [Prithvi-100M](https://huggingface.co/ibm-nasa-geospatial/burn-scar-Prithvi-100M) model (100M parameters) is finetuned for the burn scar segmentation task on HLS data.
Finetuning expects an input tile of 512x512x6, where 512 is the height and width and 6 is the number of bands. The bands are:
1. Blue
2. Green
3. Red
4. Narrow NIR
5. SWIR 1
6. SWIR 2
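
As a sketch, an input tile of this shape can be assembled by stacking the six bands; the `numpy` layout below is illustrative only (in practice, data loading is handled by the mmsegmentation pipeline defined in the config):

```python
import numpy as np

# Illustrative only: stack six single-band 512x512 HLS arrays
# (Blue, Green, Red, Narrow NIR, SWIR 1, SWIR 2) into one tile.
bands = [np.zeros((512, 512), dtype=np.float32) for _ in range(6)]
tile = np.stack(bands, axis=-1)  # height x width x bands
assert tile.shape == (512, 512, 6)
```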

### Code
Code for finetuning is available on [GitHub](https://github.com/NASA-IMPACT/hls-foundation-os/tree/main/fine-tuning-examples).

The configuration used for finetuning is available in the [config](https://github.com/NASA-IMPACT/hls-foundation-os/blob/main/fine-tuning-examples/configs/firescars_config.py) file.

To run inference, first install the dependencies:

```
mamba create -n prithvi-burn-scar python=3.10 pycocotools ncurses
mamba activate prithvi-burn-scar
pip install --upgrade pip && \
    pip install -r requirements.txt && \
    mim install mmcv-full==1.5.0
```

#### Instructions for downloading from [HuggingFace datasets](https://huggingface.co/datasets)

1. Create account on https://huggingface.co/join
2. Install `git` following https://git-scm.com/downloads
3. Install git-lfs with `sudo apt install git-lfs` and `git lfs install`
4. Run the following command to download the HLS datasets. You may need to
   enter your HuggingFace username/password to do the `git clone`.

   ```
   mkdir -p data
   cd data/
   git clone https://huggingface.co/datasets/ibm-nasa-geospatial/hls_burn_scars burn_scars
   tar -xzvf burn_scars/hls_burn_scars.tar.gz -C ./
   ```


With the dataset downloaded and the environment set up, you can run the inference script:

```
python burn_scar_batch_inference_script.py \
-config burn_scars_Prithvi_100M.py \
-ckpt burn_scars_Prithvi_100M.pth \
-input data/burn_scars/validation \
-output data/burn_scars/inference_output \
-input_type tif
```
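
The metrics listed in the model card metadata (accuracy, IoU, F1 score) can be computed from a predicted binary mask and a ground-truth mask. A minimal sketch with `numpy` (the `binary_metrics` helper below is illustrative and not part of the repository):

```python
import numpy as np

def binary_metrics(pred, gt):
    """Accuracy, IoU, and F1 for binary burn-scar masks (illustrative)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    tn = np.logical_and(~pred, ~gt).sum()  # true negatives
    acc = (tp + tn) / pred.size
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return acc, iou, f1

# Tiny 2x2 example: one true positive, one false positive.
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
acc, iou, f1 = binary_metrics(pred, gt)  # acc=0.75, iou=0.5, f1=2/3
```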

### Results