---
library_name: transformers
tags:
- semantic-segmentation
- vision
- ecology
datasets:
- restor/tcd
pipeline_tag: image-segmentation
widget:
- src: samples/610160855a90f10006fd303e_10_00418.tif
  example_title: Urban scene
license: cc
metrics:
- accuracy
- f1
- iou
---

# Model Card for Restor's UNet-based TCD models

This is a semantic segmentation model that can delineate tree cover in high resolution (10 cm/px) aerial images.

This model card is mostly the same for all similar models uploaded to Hugging Face. The model name refers to the specific architecture variant (e.g. a UNet with a ResNet34 or ResNet50 backbone), but the broad details of training and evaluation are identical.

This repository is for `tcd-unet-r34`.

## Model Details

### Model Description

This semantic segmentation model was trained on global aerial imagery and is able to accurately delineate tree cover in similar images. The model does not detect individual trees, but provides a per-pixel classification of tree/no-tree.

- **Developed by:** [Restor](https://restor.eco) / [ETH Zurich](https://ethz.ch)
- **Funded by:** This project was made possible via a [Google.org impact grant](https://blog.google/outreach-initiatives/sustainability/restor-helps-anyone-be-part-ecological-restoration/)
- **Model type:** Semantic segmentation (binary class)
- **License:** CC-BY-NC; CC-BY to follow
- **Finetuned from model:** UNet

[UNet](https://link.springer.com/chapter/10.1007/978-3-319-24574-4_28) is a classic CNN-based segmentation architecture that has stood the test of time; it is easy to implement and deploy and, for the most part, works well.

### Model Sources

- **Repository:** https://github.com/restor-foundation/tcd
- **Paper:** We will release a preprint shortly.

## Uses

The primary use-case for this model is assessing canopy cover from aerial images (i.e. the percentage of a study area that is covered by tree canopy).

### Direct Use

This model is suitable for inference on a single image tile. For performing predictions on large orthomosaics, a higher-level framework is required to manage tiling the source imagery and stitching predictions. Our repository provides a comprehensive reference implementation of such a pipeline and has been tested on extremely large images (country-scale).

The model will give you predictions for an entire image. In most cases users will want to predict cover for a specific region of the image, for example a study plot or some other geographic boundary. If you predict tree cover in an image, you should perform some kind of region-of-interest analysis on the results. Our linked pipeline repository supports shapefile-based region analysis.

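If you are not using our pipeline, a region-of-interest analysis can be done with standard geospatial tooling. The sketch below is a minimal, illustrative example (not part of our codebase): it assumes the model output has been written to a georeferenced mask in which tree pixels are labelled 1, uses `rasterio` and `geopandas`, and the file names are placeholders.

```python
# Minimal sketch: estimate % canopy cover inside each polygon of a shapefile,
# given a georeferenced prediction mask (assumptions noted above).
import geopandas as gpd
import rasterio
from rasterio.mask import mask as rio_mask

plots = gpd.read_file("study_plots.shp")            # placeholder shapefile

with rasterio.open("prediction_mask.tif") as src:   # placeholder prediction raster
    plots = plots.to_crs(src.crs)                   # match the raster CRS
    for idx, geom in enumerate(plots.geometry):
        clipped, _ = rio_mask(src, [geom], crop=True, nodata=255)
        valid = clipped[0] != 255                   # pixels inside the polygon
        cover = (clipped[0][valid] == 1).mean()     # assumes tree pixels == 1
        print(f"Plot {idx}: {100 * cover:.1f}% canopy cover")
```
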
### Out-of-Scope Use

While we trained the model on globally diverse imagery, some ecological biomes are under-represented in the training dataset and performance may vary. We therefore encourage users to experiment with their own imagery before using the model for any sort of mission-critical use.

The model was trained on imagery at a resolution of 10 cm/px. You may be able to get good predictions at other geospatial resolutions, but the results may not be reliable. In particular, the model is essentially looking for "things that look like trees", and this is highly resolution-dependent. If you want to routinely predict images at a higher or lower resolution, you should fine-tune this model on your own data or on a resampled version of the training dataset.

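As a rough illustration of working at the native resolution, the snippet below reads an orthomosaic resampled to approximately 10 cm/px using `rasterio`. It is not part of our pipeline; it assumes a projected CRS with metre units, and the file name is a placeholder.

```python
# Minimal sketch: read an orthomosaic resampled to ~10 cm/px before inference.
# Assumes a projected CRS in metres; "orthomosaic.tif" is a placeholder.
import rasterio
from rasterio.enums import Resampling

TARGET_GSD = 0.10  # metres per pixel

with rasterio.open("orthomosaic.tif") as src:
    scale_x = src.res[0] / TARGET_GSD
    scale_y = src.res[1] / TARGET_GSD
    data = src.read(
        out_shape=(src.count, int(src.height * scale_y), int(src.width * scale_x)),
        resampling=Resampling.bilinear,
    )  # data is now (bands, new_height, new_width) at roughly 10 cm/px
```
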
The model does not predict biomass, canopy height or other derived information. It only predicts the likelihood that a given pixel is covered by tree canopy.

As-is, the model is not suitable for carbon credit estimation.

## Bias, Risks, and Limitations

The main limitation of this model is false positives over objects that look like, or could be confused with, trees: for example, large bushes, shrubs or ground cover that resembles tree canopy.

The dataset used to train this model was annotated by non-experts. We believe that this is a reasonable trade-off given the size of the dataset and the results on independent test data, as well as empirical evaluation during operational use at Restor on partner data. However, there are almost certainly incorrect labels in the dataset and these may translate into incorrect predictions or other biases in model output. We have observed that the models tend to "disagree" with training data in a way that is probably correct (i.e. the aggregate statistics of the labels are good) and we are working to re-evaluate all training data to remove spurious labels.

We provide cross-validation results to give a robust estimate of prediction performance, as well as results on independent imagery (i.e. images the model has never seen) so users can make their own assessments. We do not provide any guarantees on accuracy and users should perform their own independent testing for any kind of "mission critical" or production use.

There is no substitute for trying the model on your own data and performing your own evaluation; we strongly encourage experimentation!

## How to Get Started with the Model

You can see a brief example of inference in [this Colab notebook](https://colab.research.google.com/drive/1N_rWko6jzGji3j_ayDR7ngT5lf4P8at_).

For end-to-end usage, we direct users to our prediction and training [pipeline](https://github.com/restor-foundation/tcd), which also supports tiled prediction over arbitrarily large images, reporting outputs, etc.

## Training Details

### Training Data

The training dataset may be found [here](https://huggingface.co/datasets/restor/tcd), where you can find more details about the collection and annotation procedure. Our image labels are largely released under a CC BY 4.0 license, with smaller subsets of CC BY-NC and CC BY-SA imagery.

### Training Procedure

We used a 5-fold cross-validation process to adjust hyperparameters during training, before training on the "full" training set and evaluating on a holdout set of images. The model in the main branch of this repository should be considered the release version.

We used [PyTorch Lightning](https://lightning.ai/) as our training framework, with hyperparameters listed below. The training procedure is straightforward and should be familiar to anyone with experience training deep neural networks.

A typical training command using our pipeline for this model:

```bash
tcd-train semantic unet-r34 data.output= ... data.root=/mnt/data/tcd/dataset/holdout data.tile_size=1024
```

#### Usage

This model does not work directly with the `transformers` library. The weights are compatible with Segmentation Models PyTorch and can be loaded directly as follows:

```python
import segmentation_models_pytorch as smp
import torch

unet = smp.Unet(encoder_name="resnet34", classes=2, in_channels=3)
unet.load_state_dict(torch.load("model.pt"), strict=True)
```

Or you can use the `tcd-predict` script in our pipeline:

```bash
tcd-predict restor/tcd-unet-r34 <path-to-image> <path-to-output>
```

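If you are loading the weights manually with `segmentation_models_pytorch`, a minimal single-tile inference sketch might look like the following. This is illustrative rather than an extract from our pipeline: it assumes a 1024 px RGB tile (the file name is a placeholder), applies the ImageNet normalisation listed under the training hyperparameters below, and takes an argmax over the two output classes (the class ordering is an assumption).

```python
# Illustrative single-tile inference with the model loaded above.
# Assumes a 1024 x 1024 RGB tile; "tile.tif" is a placeholder file name.
import numpy as np
import torch
from PIL import Image

image = Image.open("tile.tif").convert("RGB")
x = torch.from_numpy(np.array(image)).float().permute(2, 0, 1) / 255.0

# ImageNet normalisation (matches the training configuration below)
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
x = (x - mean) / std

unet.eval()
with torch.no_grad():
    logits = unet(x.unsqueeze(0))   # (1, 2, H, W)
    mask = logits.argmax(dim=1)[0]  # per-pixel class index

# Assuming class 1 corresponds to "tree"
print(f"Predicted canopy cover: {100 * (mask == 1).float().mean().item():.1f}%")
```
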
#### Training Hyperparameters

- Image size: 1024 px square
- Learning rate: initially 1e-3
- Learning rate schedule: reduce on plateau
- Optimizer: AdamW
- Loss function: Focal
- Augmentation: random crop to 1024x1024, arbitrary rotation, flips, colour adjustments
- Number of epochs: 75 during cross-validation to ensure convergence; 50 for final models
- Normalisation: ImageNet statistics

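For readers reproducing a similar setup outside our pipeline, the sketch below shows one way these settings map onto standard PyTorch, Segmentation Models PyTorch and Albumentations objects. It is illustrative only and is not an extract from our training code; in particular, the augmentation library and exact augmentation parameters shown here are assumptions.

```python
# Illustrative configuration matching the hyperparameters listed above;
# not an extract from the tcd training pipeline.
import albumentations as A
import segmentation_models_pytorch as smp
import torch

model = smp.Unet(encoder_name="resnet34", classes=2, in_channels=3)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min")
criterion = smp.losses.FocalLoss(mode="multiclass")

# Augmentation: crop, rotation, flips, colour jitter, ImageNet normalisation
transforms = A.Compose([
    A.RandomCrop(1024, 1024),
    A.Rotate(limit=180),
    A.HorizontalFlip(),
    A.VerticalFlip(),
    A.ColorJitter(),
    A.Normalize(),  # defaults to ImageNet mean/std
])
```
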
#### Speeds, Sizes, Times

You should be able to evaluate the model on a CPU; however, you will need a lot of available RAM if you try to infer large tile sizes. In general we find that 1024 px inputs are as large as you want to go, given the fixed size of the output segmentation masks (i.e. it is probably better to perform inference in batched mode at 1024x1024 px than to try to predict a single 2048x2048 px image).

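As a concrete illustration of the batched approach, the sketch below splits a large image into 1024 px sub-tiles and predicts them in one batch. It is a simplified stand-in for what our pipeline does: it does not handle tile overlap or edge blending, and the function name is purely illustrative.

```python
# Minimal sketch: predict a large image as a batch of 1024 px tiles.
# No overlap or blending is handled here; the tcd pipeline does this properly.
import torch

def predict_batched(model: torch.nn.Module, image: torch.Tensor, tile: int = 1024) -> torch.Tensor:
    """image: normalised (3, H, W) tensor with H and W divisible by `tile`."""
    _, h, w = image.shape
    tiles, coords = [], []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            tiles.append(image[:, y : y + tile, x : x + tile])
            coords.append((y, x))

    with torch.no_grad():
        preds = model(torch.stack(tiles)).argmax(dim=1)  # (N, tile, tile)

    mask = torch.zeros(h, w, dtype=preds.dtype)
    for (y, x), pred in zip(coords, preds):
        mask[y : y + tile, x : x + tile] = pred
    return mask
```
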
All models were trained on a single GPU with 24 GB VRAM (NVIDIA RTX3090) attached to a 32-core machine with 64 GB RAM. All but the largest models can be trained in under a day on a machine of this specification. The smallest models take under half a day, while the largest models take just over a day to train.

Feedback we've received from users (in the field) is that landowners are often interested in seeing the results of aerial surveys, but data bandwidth is often a prohibiting factor in remote areas. One of our goals was to support this kind of in-field usage, so that users who fly a survey can process results offline and in a reasonable amount of time (i.e. on the order of an hour).

## Evaluation

We report evaluation results on the OAM-TCD holdout split.

### Testing Data

The dataset (including the holdout split) may be found [here](https://huggingface.co/datasets/restor/tcd).

This model (`main` branch) was trained on all `train` images and tested on the `test` (holdout) images.

![Training loss](train_loss.png)

### Metrics

We report F1, Accuracy and IoU on the holdout dataset, as well as results on a 5-fold cross-validation split. Cross-validation is visualised as min/max error bars on the plots below.

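For reference, the sketch below shows how these metrics can be computed with `torchmetrics` for a batch of predicted masks; it is illustrative and may not match our evaluation code exactly (random tensors stand in for real predictions and labels).

```python
# Illustrative metric computation with torchmetrics.
import torch
from torchmetrics import Accuracy, F1Score, JaccardIndex

preds = torch.randint(0, 2, (4, 1024, 1024))   # predicted class per pixel
target = torch.randint(0, 2, (4, 1024, 1024))  # ground-truth class per pixel

accuracy = Accuracy(task="multiclass", num_classes=2)
f1 = F1Score(task="multiclass", num_classes=2, average="none")
iou = JaccardIndex(task="multiclass", num_classes=2, average="none")

print("Accuracy:", accuracy(preds, target).item())
print("F1 per class:", f1(preds, target))
print("IoU per class:", iou(preds, target))
```
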
### Results

![Validation loss](val_loss.png)
![IoU](val_jaccard_index.png)
![Accuracy (foreground)](val_multiclassaccuracy_tree.png)
![F1 Score](val_multiclassf1score_tree.png)

## Environmental Impact

This estimate is the maximum (in terms of training time) for the UNet family of models presented here. ResNet-50 based models should train in under a day.

- **Hardware Type:** NVIDIA RTX3090
- **Hours used:** < 36
- **Carbon Emitted:** 5.44 kg CO2 equivalent per model

Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

This estimate does not take into account the time required for experimentation, failed training runs, etc. For example, since we used cross-validation, each model actually required approximately 6x this estimate: one run for each fold, plus the final run.

Efficient inference on CPU is possible for field work, at the expense of inference latency. A typical single-battery drone flight can be processed in minutes.

## Citation

We will provide a preprint version of our paper shortly. In the meantime, please cite as:

**BibTeX:**

```latex
@unpublished{restortcd,
  author = "Veitch-Michaelis, Josh and Cottam, Andrew and Schweizer, Daniella and Broadbent, Eben N. and Dao, David and Zhang, Ce and Almeyda Zambrano, Angelica and Max, Simeon",
  title  = "OAM-TCD: A globally diverse dataset of high-resolution tree cover maps",
  note   = "In prep.",
  month  = "06",
  year   = "2024"
}
```

## Model Card Authors

Josh Veitch-Michaelis, 2024; on behalf of the dataset authors.

## Model Card Contact

Please contact josh [at] restor.eco for questions or further information.