Figure 1: Presto learns from structurally-masked remote sensing pixel-timeseries. We construct a
multi-sensor remote sensing pixel-timeseries, and randomly select one of the four masking strategies
described in Section 3.3. The encoder-decoder model is trained to reconstruct the original timeseries.
At fine-tuning time, we discard the decoder and only use the encoder’s output. The downstream task
may have incomplete inputs (missing timesteps or sensors) since the encoder is specifically trained on
such inputs. Presto receives both static-in-time and dynamic-in-time inputs and the location metadata
of each pixel timeseries.
many remote sensing datasets, which are points or irregularly shaped polygons (Rao et al., 2020;
Batjes et al., 2017), requiring additional methods to handle these labels (Wang et al., 2020a).
We introduce the Pretrained Remote Sensing Transformer (Presto), a lightweight model designed to
ingest pixel-timeseries inputs from a variety of Earth observation sensors and data products. Presto
operates on individual pixels, using the temporal and multimodal structure of the data instead of the
image structure. To learn powerful representations of remote sensing data that can be adapted to a
wide range of tasks, Presto leverages a self-supervised masked autoencoding approach, reconstructing
unobserved timepoints and sensory modalities. This allows Presto to be robust to missing data and to
flexibly accommodate diverse input formats. We find Presto excels even in image-based tasks where
the temporal dimension is completely absent.
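To make this input format concrete, the following sketch (our own illustration, not the released Presto code) shows how a single pixel's multi-sensor sample might be organized; the channel counts and field names are assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PixelSample:
    """One pixel's multi-sensor input (illustrative; channel counts are assumptions)."""
    # Dynamic-in-time inputs: one row per monthly timestep.
    dynamic: np.ndarray  # shape (num_timesteps, num_dynamic_channels)
    # Static-in-time inputs, e.g. elevation, which are constant per pixel.
    static: np.ndarray   # shape (num_static_channels,)
    # Location metadata for the pixel-timeseries.
    lat: float
    lon: float

sample = PixelSample(
    dynamic=np.zeros((12, 15)),  # e.g. Sentinel-1/2 bands, climate data, NDVI
    static=np.zeros(2),          # e.g. elevation and slope
    lat=45.0,
    lon=-73.5,
)
```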
Presto addresses the following requirements, which are critical to the useful deployment of pre-trained
models in the remote sensing context:
• Computational efficiency: When deployed, models built for remote sensing data are typically used
to make contiguous geospatial predictions over millions (or billions) of samples to form a predicted
map. The computational performance of models is therefore one of the primary considerations at
deployment time. Van Tricht (2021), Hengl et al. (2017) and Robinson et al. (2019) are all global-
or large-scale map-making efforts that prioritized efficiency over accuracy when deploying remote
sensing models at scale. Presto is competitive with ViT- or ResNet-based models, despite having up
to 1000× fewer trainable parameters and requiring orders of magnitude fewer FLOPs at inference
time.
• Ability to process inputs of varying shapes: Different downstream tasks may require very
different remote sensing inputs. For example, for crop mapping and yield estimation, Sainte
Fare Garnot et al. (2020) and You et al. (2017) discarded all spatial information in the inputs in
favor of emphasizing temporal patterns. We test Presto on a wide range of downstream inputs (for
example, with spatial information present or absent, and with single or multiple timesteps of data),
and find it is competitive with models designed specifically for those inputs.
• Ability to process a range of remote sensing datasets: For fuel moisture estimation, Rao et al.
(2020) found that the inclusion of derived products in addition to raw inputs significantly improved
performance. Presto can ingest a range of static-in-time and dynamic-in-time raw input data as well
as derived product inputs widely used in Earth observation, such as NDVI (Rouse et al., 1974); a
minimal NDVI computation is sketched after this list.
• Ability to handle missing data: The coverage of remote sensing products is often spatially and
temporally incomplete. For example, certain regions experience very high (>90%) cloud coverage,
reducing the utility of optical measurements such as Sentinel-2 imagery (Sudmanns et al., 2019).
Because Presto ingests a variety of remote sensing inputs, it can leverage alternative data sources if
one is missing (for instance, relying on Sentinel-1, which sees through clouds, if Sentinel-2 images
are cloudy); a sketch of this fallback follows this list.
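As a concrete example of a derived product, the sketch below computes NDVI from Sentinel-2's red (B4) and near-infrared (B8) bands using the standard band-ratio formula of Rouse et al. (1974); the epsilon guard and function signature are our own illustrative choices, not part of Presto.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized Difference Vegetation Index (Rouse et al., 1974).

    For Sentinel-2, `nir` is band B8 and `red` is band B4. The small
    epsilon (our addition) avoids division by zero over dark pixels.
    """
    return (nir - red) / (nir + red + eps)
```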
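And to illustrate the missing-data scenario, here is a minimal sketch (a hypothetical input-preparation step, not Presto's actual pipeline) that flags cloudy Sentinel-2 timesteps so that the cloud-free radar channels carry the signal for those months; all array shapes and the cloud flags are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-timestep inputs for one pixel over 12 months.
s2 = rng.random((12, 10))        # Sentinel-2 bands (optical, cloud-affected)
s1 = rng.random((12, 2))         # Sentinel-1 VV/VH (radar, sees through clouds)
cloudy = rng.random(12) > 0.5    # which months are too cloudy to use

# Mark cloudy optical timesteps as missing; a model trained on such
# inputs can then rely on the remaining (radar) channels for those months.
s2_masked = s2.copy()
s2_masked[cloudy] = np.nan       # in practice, a learned mask token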
Our results support the surprising conclusion that a pixel-based approach can in some cases match or
outperform sophisticated computer vision-based approaches. We hypothesize that this is possible
because (i) Presto learns from many semantically dense data sources, allowing it to extract informative
patterns from pixel-timeseries, and (ii) many remote sensing tasks require significantly smaller
receptive fields than those provided by computer vision-based models. Brown et al. (2022) leveraged
such properties to train a model 100× smaller than standard models while achieving state-of-the-art
land-cover segmentation results.
2 Related Work
Architectures for Remote Sensing When processing remote sensing timeseries, transformers have
been extensively investigated either as unmodified architectures (Rußwurm and Körner, 2020) or
as architectures designed for specific tasks (Sainte Fare Garnot et al., 2020; Tarasiou et al., 2023).
Recurrent networks have also been investigated (Kerner et al., 2020; Rußwurm and Körner, 2020).
When treating remote sensing data as images with a single timestep or a few (up to 3) timesteps,
computer vision are commonly used, ranging from ResNets (Manas et al., 2021; Ayush et al., 2021;
Rußwurm et al., 2020) to Vision Transformers (Cong et al., 2022; Reed et al., 2022; Fuller et al.,
2023).
Self-supervised learning for Remote Sensing While contrastive learning has been investigated for
remote sensing (Manas et al., 2021), recent self-supervised learning research has focused on masked
autoencoders (Yuan et al., 2022; Cong et al., 2022; Reed et al., 2022; Fuller et al., 2023). However,
these approaches (i) focus on learning from raw satellite data products (ignoring derived products such
as elevation) and typically only ingest data from a single sensor (the exception being the CROMA
model of Fuller et al. (2023), which ingests both Sentinel-1 and Sentinel-2 data), (ii) ingest very
few or no timesteps (Reed et al. (2022) and Fuller et al. (2023) ingest only one timestep while Cong
et al. (2022) ingest up to three timesteps), (iii) expect inputs of a fixed shape (for instance, ViT-based
models require spatial dimensions to be present), so missing data is not handled natively, and (iv)
generally yield larger models ranging from 2.5 million parameters (Yuan and Lin, 2020) to over 300
million parameters for ViT-based methods, making their deployment in compute-constrained settings
challenging.
3 Method
We aim to learn a model f that produces useful representations, trained in a self-supervised manner
on unlabelled remote sensing pixel-timeseries data, while meeting the usability requirements outlined
in Section 1. This model can then be applied to a wide variety of downstream remote sensing tasks.
These downstream tasks may contain input data from a range of sensors with differing numbers of
timesteps.
Our approach is based on the masked autoencoding framework (He et al., 2022), in which the network
architecture includes both an encoder (f) and a decoder (g). During pre-training, part of the input is
masked out and the encoder embeds the remaining (non-masked) part of the input. The decoder aims
to reconstruct the masked-out part of the input, given the encoder's output. At fine-tuning time, we
discard g and only use f (either as a feature extractor or a fine-tunable model) for downstream tasks.
In the sections below, we discuss how Presto customizes this general framework for multi-sensor
remote sensing timeseries data. An overview of the Presto pre-training methodology is shown in
Figure 1, and full pre-training details are in Section A.1.
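As a concrete (and heavily simplified) illustration of this pre-training loop, the sketch below applies the masked-autoencoding recipe to a batch of pixel-timeseries. The tiny MLP encoder/decoder and the uniform random masking are stand-ins for Presto's transformer and its four structured masking strategies, so treat this as a schematic rather than the actual implementation; all dimensions are illustrative.

```python
import torch
import torch.nn as nn

T, C, D = 12, 17, 32  # timesteps, channels, embedding dim (illustrative)
f = nn.Sequential(nn.Linear(C, D), nn.ReLU(), nn.Linear(D, D))  # encoder (stand-in)
g = nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, C))  # decoder (stand-in)
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)

x = torch.randn(64, T, C)           # a batch of pixel-timeseries (dummy data)
mask = torch.rand(64, T, C) < 0.5   # uniform masking; Presto uses structured strategies
x_visible = x.masked_fill(mask, 0.0)  # hide masked entries (a mask token in practice)

recon = g(f(x_visible))                   # encode visible input, decode a reconstruction
loss = ((recon - x) ** 2)[mask].mean()    # reconstruction loss on masked entries only

opt.zero_grad()
loss.backward()
opt.step()
```

At fine-tuning time, only f would be kept, with its output used as features for (or fine-tuned against) the downstream task.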
3.1 Pre-training Data
Self-supervised models for remote sensing must generalize to a wide range of geographies and
tasks (Lacoste et al., 2023). We therefore aimed to collect a globally representative pre-training
dataset. We followed the sampling strategy of Brown et al. (2022) to construct a dataset of 21.5M
pixel samples, each with a resolution of 10m per pixel. Appendix A.1.1 describes the pre-training
dataset construction process in detail. Presto was trained on pixel-timeseries of 12-month contiguous
intervals, sampled from a 2-year period from the beginning of 2020 until the end of 2021, with
each month represented by one timestep (similar to the approach adopted by Tseng et al. (2021)).

Figure 2: Presto learns to reconstruct channels that are completely masked, in a spatially
cohesive manner. In this experiment, we masked only the Sentinel-2 RGB channels; Presto was able
to reconstruct these channels even when they were absent from the input. The reconstructions are
spatially consistent even though Presto only receives single-pixel inputs.
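To make the temporal sampling concrete, the sketch below (our illustration; the channel count and array contents are placeholders) draws one 12-month contiguous window from a 24-month pixel-timeseries covering 2020 and 2021.

```python
import numpy as np

NUM_MONTHS_TOTAL = 24  # Jan 2020 .. Dec 2021, one timestep per month
WINDOW = 12            # each training sample spans 12 contiguous months

rng = np.random.default_rng(0)
timeseries = rng.random((NUM_MONTHS_TOTAL, 17))  # [months, channels], dummy data

# Pick a random valid start month, then slice a contiguous 12-month window.
start = rng.integers(0, NUM_MONTHS_TOTAL - WINDOW + 1)
sample = timeseries[start : start + WINDOW]      # shape: (12, channels)
```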
Derived data products that result from the analysis of lower-level data (e.g., Parkinson et al. (2006))