achieving results comparable to SITS-Former despite having 6× fewer parameters (shown in Table 6). This shows that Presto can ingest timeseries at different temporal resolutions and at varying intervals.
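One reason irregular sampling is tractable is that each token carries explicit temporal metadata rather than relying only on its position in the sequence. As an illustrative sketch (the helper name is hypothetical, and Presto's actual temporal encoding may differ in detail), mapping acquisition dates to calendar-month indices handles monthly composites and irregular revisit intervals uniformly:

```python
from datetime import date

def month_positions(acquisition_dates):
    # Map each observation to a 0-11 calendar-month index; the spacing
    # between observations can be arbitrary, since each token is tagged
    # with its own timestamp-derived encoding.
    return [d.month - 1 for d in acquisition_dates]

# Monthly composites and an irregular revisit series both yield valid indices:
monthly = [date(2021, m, 1) for m in (1, 2, 3)]
irregular = [date(2021, 1, 5), date(2021, 1, 15), date(2021, 2, 20)]
```

A model conditioned on such per-token timestamps need not assume a fixed revisit period, which is consistent with the flexibility described above.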
In addition, the S2-Agri dataset is missing the pixel-location metadata that is always passed to Presto during pre-training. S2-Agri was sampled from a single S2 tile, so we used the location of the central pixel of this tile for all pixels in the dataset. Even with this much less accurate location metadata, Presto remained performant.
Table 7: Structured masking strategies yield the best downstream performance. We measured Presto_R's F1 score on the CropHarvest validation task. Combining structured strategies outperformed the "Random" masking employed by He et al. (2022).
Channel Groups   Random Timesteps   Contiguous Timesteps   Random   F1 Score
      ✓                                                             0.646
                        ✓                                           0.653
                                             ✓                      0.664
                                                              ✓     0.649
      ✓                 ✓                    ✓                ✓     0.665
5.4 Ablations

We conducted three ablations to better understand Presto's performance:
• Structured masking strategies perform best: Table 7 shows results from ablating the masking strategies. Unlike other masked-autoencoder methods (He et al., 2022), we found that combining structured masking with random masking outperforms random masking alone.

• Pre-training Presto is critical to achieving strong performance: In Tables 3, 5 and 6, we compared the performance of a randomly initialized Presto architecture with the pre-trained model. Pre-training yielded a significant increase in performance (a 50% increase in accuracy on the S2-Agri_100 dataset). Even when the downstream training dataset was large (EuroSAT has 21,600 training samples), pre-training yielded a 14% increase in accuracy given RGB inputs and up to a 22% increase in accuracy at lower resolutions (Table 11). For TreeSatAI with S1 data (Table 15), a randomly initialized model slightly outperformed the pre-trained model. We hypothesize that this is due to the difference in input relative to the pre-training data, since the TreeSatAI input consists of a single image from only one timestep and one channel group.

• Presto's performance scales with model size: To measure how different model sizes affect Presto's performance, we pre-trained two larger Presto variants: a deeper variant with 4 encoder layers instead of 2, and a wider variant with a doubled encoder size (Table 8). Performance improved as model size increased, suggesting that practitioners who can afford greater computational costs could obtain better results by training a larger Presto model.
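The structured strategies ablated in Table 7 can be sketched as boolean masks over a (timestep × channel-group) token grid. The following is an illustrative reimplementation under assumed names and a 50% masking ratio, not Presto's actual pre-training code:

```python
import numpy as np

def structured_mask(num_timesteps, num_groups, strategy, ratio=0.5, rng=None):
    """Build a boolean mask over a (timesteps x channel-groups) token grid.
    Hypothetical sketch of the ablated strategies; the real masking code
    and ratios may differ."""
    if rng is None:
        rng = np.random.default_rng()
    mask = np.zeros((num_timesteps, num_groups), dtype=bool)
    if strategy == "random":
        # He et al. (2022)-style: mask individual tokens uniformly at random
        idx = rng.choice(num_timesteps * num_groups,
                         size=int(ratio * num_timesteps * num_groups),
                         replace=False)
        mask.ravel()[idx] = True
    elif strategy == "channel_groups":
        # hide whole sensor channel groups across all timesteps
        groups = rng.choice(num_groups, size=max(1, int(ratio * num_groups)),
                            replace=False)
        mask[:, groups] = True
    elif strategy == "random_timesteps":
        # hide every channel group at randomly chosen timesteps
        steps = rng.choice(num_timesteps, size=max(1, int(ratio * num_timesteps)),
                           replace=False)
        mask[steps, :] = True
    elif strategy == "contiguous_timesteps":
        # hide a contiguous block of timesteps
        span = max(1, int(ratio * num_timesteps))
        start = rng.integers(0, num_timesteps - span + 1)
        mask[start:start + span, :] = True
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return mask
```

Combining strategies, as in the best-performing row of Table 7, amounts to sampling one of these mask generators per training example.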
6 Discussion & Conclusion

Limitations Presto is designed to ingest 10 m/px-resolution imagery and is pre-trained on products at this scale. This decision is motivated by the free, global availability over time of products at this scale (such as Sentinel-1 and Sentinel-2). Presto does not natively process very-high-resolution imagery, such as <1 m/px imagery from commercial satellites or drones, which can be costly and often lacks complete global and temporal coverage. In addition, Presto is a pixel-timeseries model. While we demonstrated Presto's flexibility on single-timestep image datasets, image-based models may be preferred if a user's goal is to process entire images to make a prediction. We observed that Presto's performance on the EuroSAT dataset plateaued as the input resolution increased (Table 5), due to images from classes where the pixels relevant to the class are a minority of the pixels in the image (e.g., highways). In such scene-classification challenges, image-based models, which can learn the shape of the relevant pixels, may be better suited. We discuss this further in Section A.6.
Conclusion We present Presto: a lightweight, pre-trained timeseries transformer for remote sensing. By leveraging structure unique to remote sensing data (specifically, (i) an important temporal dimension, (ii) associated metadata, and (iii) a diversity of sensors), we are able to train an extremely lightweight model which achieves state-of-the-art results on a wide variety of globally distributed evaluation tasks. Computational efficiency is of paramount importance in remote sensing settings and often determines which models ultimately get selected for deployment. We demonstrated that strong performance can be achieved while meeting this constraint, and that self-supervised learning can provide significant benefits even for small models.
Table 8: Effect of model size on validation performance. To understand the effect of model size on performance, we pre-trained two larger variants of Presto. As in Table 7, we measure Presto_R's performance on the CropHarvest validation task. The number of parameters includes both the encoder and decoder parameters. FLOPs are computed for a "full" input (12 timesteps, with no missing channels) passed through the encoder and decoder.
Depth   Width   # params (M)   FLOPs (M)   F1 score
2       128     0.81           88.94       0.665
2       256     2.02           220.81      0.687
4       128     1.21           132.42      0.669
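The pattern in Table 8, where the wider variant costs more than the deeper one, follows from how transformer parameter counts scale: linearly with depth but roughly quadratically with width. A back-of-the-envelope sketch for a standard encoder stack (illustrative only; it omits the decoder, embeddings, and any Presto-specific details, so it will not reproduce the table's exact counts):

```python
def transformer_encoder_params(depth, d_model, mlp_ratio=4):
    """Rough parameter count for a stack of standard transformer encoder
    layers: attention projections, feed-forward network, layer norms."""
    attn = 4 * (d_model * d_model + d_model)        # Q, K, V, output projections (+ biases)
    ffn = 2 * mlp_ratio * d_model * d_model + (mlp_ratio + 1) * d_model  # two linear layers
    norms = 2 * 2 * d_model                          # two LayerNorms (scale + bias)
    return depth * (attn + ffn + norms)

base = transformer_encoder_params(2, 128)  # baseline configuration
wide = transformer_encoder_params(2, 256)  # doubled width: ~4x the encoder parameters
deep = transformer_encoder_params(4, 128)  # doubled depth: exactly 2x the encoder parameters
```

Doubling depth doubles the encoder's parameter count, while doubling width roughly quadruples it, consistent with the relative costs of the two variants in Table 8.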
Impact statement

Machine learning applications to remote sensing have a wide range of societally beneficial outcomes, ranging from tracking progress on sustainable development goals (Ferreira et al., 2020) to improved weather forecasting (English et al., 2013; Voosen, 2020) to disaster management (Kansakar and Hossain, 2016).
Presto is designed to be accessible to a wide range of practitioners; we achieve this by training Presto only on publicly available data and by keeping the model small enough that it can be used in compute-constrained environments. In addition to increasing Presto's accessibility, its small size also lowers its carbon footprint (Strubell et al., 2019).
As described by Tuia et al. (2023), a natural concern when applying machine learning algorithms to remote sensing data is their use to collect information about individuals who are unaware that data is being collected, and who therefore cannot consent to this practice. We therefore encourage deployment of Presto in collaboration with local communities and stakeholders (Krafft; Kshirsagar et al., 2021; Nakalembe and Kerner, 2023).
Acknowledgements

This work was supported by NASA under the NASA Harvest Consortium on Food Security and Agriculture (Award #80NSSC18M0039). This research was enabled in part by compute resources provided by Mila (mila.quebec); in addition, we acknowledge material support from NVIDIA Corporation in the form of computational resources. We thank Esther Rolf and Caleb Robinson for reviewing drafts of this manuscript.
References

Earth Engine data catalogue. https://developers.google.com/earth-engine/datasets/catalog. Accessed: 2023-01-31.

Tick Tick Bloom: Harmful algal bloom detection challenge. https://www.drivendata.org/competitions/143/tick-tick-bloom/page/649/, 2023. Accessed: 2023-03-10.

SRTM 90m Digital Elevation Data. The CGIAR Consortium for Spatial Information, 2003.

C. Abys, S. Skakun, and I. Becker-Reshef. Two decades of winter wheat expansion and intensification