(i) come from the same sensor or product, (ii) have equivalent native spatial resolutions and (iii)
represent similar parts of the electromagnetic spectrum (for Sentinel-2 channel groups). We group
the input data into the following channel groups (a schematic mapping is sketched after this list):
•Sentinel-1 : The VV and VH bands from the Sentinel-1 sensor
•Sentinel-2 RGB : The B2, B3 and B4 bands from the Sentinel-2 sensor
•Sentinel-2 Red Edge : The B5, B6 and B7 bands from the Sentinel-2 sensor
•Sentinel-2 Near Infrared (10m) : The B8 band from the Sentinel-2 sensor
•Sentinel-2 Near Infrared (20m) : The B8A band from the Sentinel-2 sensor
•Sentinel-2 Short Wave Infrared : The B11 and B12 bands from the Sentinel-2 sensor
•NDVI : The normalized difference vegetation index, computed from the Sentinel-2 B4 and B8
bands as (B8 - B4) / (B8 + B4)
•ERA5 Climatology : Precipitation and temperature at 2m from the ERA5 Climate Reanalysis
product
•Topography : The elevation and slope of a pixel, derived from the SRTM DEM
•Location : The Cartesian coordinates of a pixel, computed from the pixel’s latitude and longitude
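For concreteness, a minimal sketch of this grouping as a Python mapping is shown below; the group and band names are illustrative and do not necessarily match those used in the released code.

```python
# Illustrative mapping of input bands to channel groups (names are ours,
# not necessarily those used in the released Presto code).
CHANNEL_GROUPS = {
    "S1": ["VV", "VH"],                                  # Sentinel-1
    "S2_RGB": ["B2", "B3", "B4"],                        # Sentinel-2 RGB
    "S2_Red_Edge": ["B5", "B6", "B7"],                   # Sentinel-2 Red Edge
    "S2_NIR_10m": ["B8"],                                # Sentinel-2 Near Infrared (10m)
    "S2_NIR_20m": ["B8A"],                               # Sentinel-2 Near Infrared (20m)
    "S2_SWIR": ["B11", "B12"],                           # Sentinel-2 Short Wave Infrared
    "NDVI": ["NDVI"],                                    # (B8 - B4) / (B8 + B4)
    "ERA5": ["temperature_2m", "total_precipitation"],   # ERA5 climatology
    "topography": ["elevation", "slope"],                # from the SRTM DEM
    "location": ["x", "y", "z"],                         # Cartesian coordinates
}
```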
Table 10: Full results for regression tasks from Table 3, including standard error computed from three
runs.
Model                      Fuel Moisture   Algae Blooms   Mean difference
Linear Regression          28.20           0.850          0%
Random Forest              23.84±0.42      1.249±0.02     15.7%
MOSAIKS-1D RF              28.75±0.15      0.972±0.01     8.15%
Presto FT (random init.)   26.07±0.52      0.955±0.05     2.40%
Presto FT                  25.28±0.30      0.815±0.03     −7.24%
Presto RF                  25.98±0.66      0.884±0.01     −1.94%
A.2 FLOP calculations
We use the thop library (https://github.com/Lyken17/pytorch-OpCounter) to calculate the
FLOPs required to encode a EuroSAT image (as plotted in Table 5(b)). For the SatMAE, ScaleMAE
and ConvMAE models, all images were resized to 224×224, so the FLOPs required to encode
an image are independent of resolution. For Presto, we compute the FLOPs required to encode a
single pixel and multiply this by the number of pixels in an image at each resolution (e.g. the
“64” resolution has 64×64 pixels, so we multiply the FLOPs required to encode a single pixel by
64×64 = 4,096). The FLOPs calculated by the thop library are recorded in Table 9.
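As a rough illustration of this procedure, the sketch below profiles a stand-in per-pixel encoder with thop and scales the per-pixel cost to a full image. The encoder and input shapes are placeholders, not the actual Presto architecture; note also that thop reports multiply-accumulate counts, which are commonly reported as FLOPs.

```python
import torch
import torch.nn as nn
from thop import profile

# Stand-in per-pixel encoder (the real Presto encoder is a small transformer);
# the input shapes below are illustrative only.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(12 * 17, 128), nn.ReLU(), nn.Linear(128, 128))

# One pixel's timeseries: (batch=1, timesteps=12, channels=17)
single_pixel = torch.randn(1, 12, 17)
flops_per_pixel, _ = profile(encoder, inputs=(single_pixel,))

# Scale the per-pixel cost to a full image, e.g. the full-resolution 64x64 EuroSAT input.
resolution = 64
flops_per_image = flops_per_pixel * resolution * resolution
print(f"Estimated FLOPs for a {resolution}x{resolution} image: {flops_per_image:.3e}")
```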
A.3 Baselines
In addition to task-specific baselines, we benchmark Presto against:
•Random Forests : Random forests are powerful baselines in remote sensing, as they remain
competitive with state-of-the-art methods (Pelletier et al., 2019; Kerner et al., 2020). Tree-based
methods, especially random forests, are commonly deployed in large-scale machine learning for
remote sensing applications (Hansen et al., 2013; Van Tricht, 2021; Di Tommaso et al., 2022).
•MOSAIKS-1D : We adapt MOSAIKS (Rolf et al., 2021) to timeseries data. MOSAIKS-1D uses
patches sampled from the pre-training dataset and convolves them over the temporal dimension instead
of the spatial dimension (a sketch of this featurization is given after this list). We benchmark
MOSAIKS-1D on all timeseries evaluation tasks except Dynamic World, which we exclude because this
approach does not work for categorical inputs. As with Presto, we use the output features with
random forests (MOSAIKS-1D RF) and with regressions (MOSAIKS-1D R).
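The sketch below illustrates this featurization under our assumptions (the function and variable names are ours, and details of the original MOSAIKS pipeline, such as patch whitening and bias terms, are omitted): random temporal patches act as 1D convolutional filters, followed by a ReLU and average pooling over time.

```python
import numpy as np

def mosaiks_1d_features(x, patches):
    """x: (timesteps, channels) pixel timeseries; patches: (K, patch_len, channels)."""
    timesteps = x.shape[0]
    k, patch_len, _ = patches.shape
    features = np.zeros(k)
    for i in range(k):
        # Convolve the random patch over the temporal dimension (valid positions only),
        # then apply a ReLU and average-pool over time.
        activations = np.array([
            np.sum(x[t : t + patch_len] * patches[i])
            for t in range(timesteps - patch_len + 1)
        ])
        features[i] = np.maximum(activations, 0.0).mean()
    return features

# Usage: patches are sampled from the pre-training dataset; the resulting features
# are fed to a random forest (MOSAIKS-1D RF) or a regression (MOSAIKS-1D R).
rng = np.random.default_rng(0)
pixel_timeseries = rng.normal(size=(12, 17))    # hypothetical 12-timestep, 17-band pixel
random_patches = rng.normal(size=(64, 3, 17))   # K=64 random patches of length 3
feats = mosaiks_1d_features(pixel_timeseries, random_patches)
```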
A.4 Downstream Results
We include complete results for the evaluation tasks. These include error bars, as well as additional
results for CropHarvest (Table 12 and Figure 3), the regression tasks (Table 10), EuroSAT
(Tables 11, 13 and 14), TreeSatAI (Table 15) and S2-Agri 100 (Table 16).
We run all downstream classifiers with 3 seeds (0, 42, 84), with the exception of the kNN classifiers
and the linear regression, which are deterministic. In the tables in the main paper (Tables 2, 3, 4 and
6) we report the average of these runs; the standard errors are reported in Tables 10, 12, 15 and 16.
•Presto as a feature extractor : When used as a feature extractor, a random forest, regression, or
K-nearest-neighbours classifier is trained on Presto’s output embeddings (see the sketch after this
list). In this case, we use scikit-learn models with the default hyperparameters. For the CropHarvest
tasks, the class labels are extremely imbalanced; we therefore set class_weight to "balanced" for
those tasks, for both Presto and MOSAIKS-1D.
•Fine-tuning Presto : When fine-tuning Presto, we use the same hyperparameters across all tasks:
an AdamW optimizer with a learning rate of 3e-4 and weight decay of 0.05.
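A minimal sketch of these two setups is given below; the model and data objects are placeholders, and only the hyperparameters stated above are taken from the text.

```python
import torch
from sklearn.ensemble import RandomForestClassifier

# (a) Presto as a feature extractor: scikit-learn defaults, with class_weight="balanced"
# only for the imbalanced CropHarvest tasks.
clf = RandomForestClassifier(class_weight="balanced")
# clf.fit(train_embeddings, train_labels)  # embeddings produced by the frozen Presto encoder

# (b) Fine-tuning Presto: the same optimizer settings are used for every task.
model = torch.nn.Linear(128, 10)  # stand-in for the Presto model being fine-tuned
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.05)
```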
As discussed in Section 5.2, we obtain per-image predictions from Presto by computing the mean and
standard deviation of Presto’s per-pixel output embeddings, and passing the concatenation of these two
vectors to a downstream classifier. This is illustrated in Figure 4.
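A minimal sketch of this aggregation, assuming the per-pixel embeddings have already been computed, is:

```python
import numpy as np

# Hypothetical per-pixel embeddings for a 64x64 image with a 128-dimensional encoder output.
rng = np.random.default_rng(0)
pixel_embeddings = rng.normal(size=(64 * 64, 128))

# Concatenate the per-dimension mean and standard deviation across pixels;
# the resulting (2 * 128)-dimensional vector is passed to the downstream classifier.
image_feature = np.concatenate([pixel_embeddings.mean(axis=0), pixel_embeddings.std(axis=0)])
```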
Table 11: Accuracy results for pre-trained and from-scratch Presto when fine-tuned on EuroSAT, at
varying resolutions. We hypothesize that the drop in performance for the full resolution (64) RGB
input is due to the model construction; the model outputs for all pixels in the image (4,096 pixels
for the full resolution) are aggregated and passed to a linear layer for classification, yielding a noisy
gradient signal.
Resolution            2             4             8             16            32            64
random init. (RGB)    0.703±0.005   0.684±0.032   0.694±0.013   0.739±0.004   0.750±0.018   0.745±0.009
pre-trained (RGB)     0.792±0.010   0.837±0.006   0.847±0.016   0.865±0.006   0.872±0.002   0.849±0.004
random init. (MS)     0.837±0.014   0.884±0.010   0.895±0.006   0.907±0.13    0.924±0.005   0.924±0.003
pre-trained (MS)      0.898±0.005   0.925±0.004   0.939±0.000   0.950±0.002   0.958±0.001   0.953±0.004
Table 12: Additional results for the CropHarvest task. In addition to the F1 scores reported in the
main paper, we report AUC ROC scores, with standard error bars computed with three runs.
Metric     Model            Kenya          Brazil         Togo           Mean
F1         Random Forest    0.559±0.003    0.000±0.000    0.756±0.002    0.441
           MOSAIKS-1D R     0.790±0.027    0.746±0.084    0.679±0.024    0.738
           TIML             0.838±0.000    0.835±0.012    0.732±0.002    0.802
           Presto R         0.816±0.000    0.891±0.000    0.798±0.000    0.835
           no DW            0.861±0.000    0.888±0.000    0.760±0.000    0.836
AUC ROC    Random Forest    0.578±0.006    0.941±0.004    0.892±0.001    0.803
           MOSAIKS-1D R     0.693±0.036    0.890±0.038    0.836±0.005    0.806
           TIML             0.794±0.003    0.988±0.001    0.890±0.000    0.890
           Presto R         0.834±0.000    0.997±0.000    0.921±0.000    0.917
           no DW            0.863±0.000    0.989±0.000    0.912±0.000    0.921
A.5 Disentangling the effect of pre-training
To understand the effect of pre-training Presto, we fine-tune Presto and train it from scratch on
EuroSAT (Table 5), the regression tasks (Table 3 in the main paper) and TreeSatAI (Table 15). We
omit the CropHarvest dataset because it was expressly designed as a few-shot-learning dataset: its
small size makes it challenging to construct validation sets with which to control the fine-tuning
(e.g. with early stopping).
Overall, we find a consistent and significant improvement from using pre-trained Presto compared
to a randomly initialized version of the model. For the EuroSAT task, pre-training consistently delivers
an increase in accuracy score of >0.1 (representing relative increases in accuracy of up to 25%). This
effect is consistent with what we observe on the TreeSatAI dataset for S2 data and on the regression
tasks (where pre-training reduces RMSE by up to 15% on the algae blooms task). For the TreeSatAI
dataset with S1 data, pre-training penalizes the model compared to random initialization; we hypothesize
that this is due to the difference in input (a single timestep and a single channel group) relative
to the pre-training data. The benefit of pre-training is especially pronounced on the S2-Agri 100
dataset; we hypothesize this is due to the small training set size.