International Journal of Digital Earth, 2019.

M. Tarasiou, E. Chavez, and S. Zafeiriou. ViTs for SITS: Vision Transformers for Satellite Image Time Series. In CVPR, 2023.

G. Tseng, I. Zvonkov, C. L. Nakalembe, and H. Kerner. CropHarvest: A global dataset for crop-type classification. In NeurIPS, Datasets and Benchmarks Track, 2021. URL https://openreview.net/forum?id=JtjzUXPEaCu.

G. Tseng, H. Kerner, and D. Rolnick. TIML: Task-informed meta-learning for crop type mapping. In AI for Agriculture and Food Systems at AAAI, 2022.

D. Tuia, K. Schindler, B. Demir, G. Camps-Valls, X. X. Zhu, M. Kochupillai, S. Džeroski, J. N. van Rijn, H. H. Hoos, F. Del Frate, et al. Artificial intelligence to advance Earth observation: a perspective. arXiv preprint arXiv:2305.08413, 2023.

K. Van Tricht. Mapping crops at global scale! What works and what doesn't? https://blog.vito.be/remotesensing/worldcereal-benchmarking, 2021. Accessed: 2023-07-31.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. NeurIPS, 2017.

P. Voosen. Europe builds 'digital twin' of Earth to hone climate forecasts, 2020.

S. Wang, W. Chen, S. M. Xie, G. Azzari, and D. B. Lobell. Weakly supervised deep learning for segmentation of remote sensing imagery. Remote Sensing, 2020a.

S. Wang, S. Di Tommaso, J. M. Deines, and D. B. Lobell. Mapping twenty years of corn and soybean across the US Midwest using the Landsat archive. Scientific Data, 2020b.

Y. Ban, P. Gong, and C. Giri. Global land cover mapping using Earth observation satellite data: Recent progresses and challenges. ISPRS Journal of Photogrammetry and Remote Sensing, 2015.

J. You, X. Li, M. Low, D. Lobell, and S. Ermon. Deep Gaussian process for crop yield prediction based on remote sensing data. In Proceedings of the AAAI Conference on Artificial Intelligence, 2017.

Y. Yuan and L. Lin. Self-supervised pretraining of transformers for satellite image time series classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14:474–487, 2020.

Y. Yuan, L. Lin, Q. Liu, R. Hang, and Z.-G. Zhou. SITS-Former: A pre-trained spatio-spectral-temporal representation model for Sentinel-2 time series classification. International Journal of Applied Earth Observation and Geoinformation, 106:102651, 2022.
A Appendix

Reproducibility

All code and data used to train and evaluate Presto will be made available upon publication, and the code is currently available at https://github.com/nasaharvest/presto. In addition, we discuss specific implementation details in Appendices A.1 and A.4. We have strived to make the Presto codebase accessible to other practitioners; to this end, we include a demo Jupyter notebook demonstrating how Presto can be applied to a new downstream task, which is available at https://github.com/nasaharvest/presto/blob/main/downstream_task_demo.ipynb.

A.1 Pre-training details

We outline the training hyperparameters below:
• Training length: We train the model for 20 epochs, with a batch size of 4,096 (resulting in 5,950 batches per epoch). On a single NVIDIA V100 GPU, this takes 43¼ hours.

• Optimizer and learning rate: We train the model with an AdamW optimizer. We use a cosine annealing schedule for the learning rate, with a maximum learning rate of 0.001 reached at the 2nd epoch. We apply a weight decay of 0.05 and β values of (0.9, 0.95). A sketch of this configuration follows the list.

• Masking: We use a masking ratio of 0.75, randomly selecting (for each instance) a masking strategy from the ones described in Section 3.3. If the masking strategy cannot mask the right number of tokens, we randomly mask additional tokens to achieve the correct masking ratio.
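The hyperparameters above map directly onto standard PyTorch components. The following is a minimal sketch under our own assumptions: the placeholder model, the warmup-then-cosine reading of "maximum learning rate of 0.001 at the 2nd epoch", and the `top_up_mask` helper are illustrative, not the released Presto implementation.

```python
import numpy as np
import torch

model = torch.nn.Linear(17, 128)  # placeholder for the Presto encoder-decoder
EPOCHS, BATCHES_PER_EPOCH = 20, 5950
WARMUP_STEPS = 2 * BATCHES_PER_EPOCH  # peak learning rate at the 2nd epoch

optimizer = torch.optim.AdamW(
    model.parameters(), lr=0.001, weight_decay=0.05, betas=(0.9, 0.95)
)
# Linear warmup to the maximum learning rate, then cosine annealing.
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer,
    [
        torch.optim.lr_scheduler.LinearLR(
            optimizer, start_factor=1e-8, total_iters=WARMUP_STEPS
        ),
        torch.optim.lr_scheduler.CosineAnnealingLR(
            optimizer, T_max=EPOCHS * BATCHES_PER_EPOCH - WARMUP_STEPS
        ),
    ],
    milestones=[WARMUP_STEPS],
)

def top_up_mask(mask: np.ndarray, ratio: float = 0.75) -> np.ndarray:
    """Randomly mask extra tokens when a strategy under-masks.

    `mask` is a boolean array over tokens (True = masked); this mirrors
    the top-up rule described in the Masking bullet above.
    """
    shortfall = int(ratio * mask.size) - int(mask.sum())
    if shortfall > 0:
        unmasked = np.flatnonzero(~mask)
        mask[np.random.choice(unmasked, size=shortfall, replace=False)] = True
    return mask
```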
A.1.1 Pre-training data

Figure 6: The distribution of the pre-training dataset described in Section 3.1.
Remote sensing models can be deployed in a wide range of geographies, with few labelled datapoints available at fine-tuning time (Kerner et al., 2020; Böhm et al., 2022). We therefore aim to collect a globally representative pre-training dataset. We achieve this by following the sampling strategy used by Dynamic World (Brown et al., 2022): we divide the Earth into three regions (the Western Hemisphere and two regions in the Eastern Hemisphere), further divide these regions into ecoregions, and gather stratified samples from each region using land cover classes as sampling strata. Figure 6 shows the resulting geographical distribution. Each sample represents a 510×510 pixel tile with a spatial resolution of 10 metres per pixel. To obtain pixel-timeseries, we grid-sample 2,500 pixels from each tile, yielding a total of 21,535,000 pixel samples (each with 24 one-month timesteps); the grid-sampling step is sketched below.
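A minimal sketch of this grid-sampling step, assuming NumPy and a tile stored as a (height, width, timesteps, channels) array. The function name and the regular 50×50 grid are our assumptions (50 × 50 = 2,500 pixels per tile, and 21,535,000 / 2,500 implies 8,614 tiles), not the exact Presto preprocessing code.

```python
import numpy as np

def grid_sample_pixels(tile: np.ndarray, n_per_side: int = 50) -> np.ndarray:
    """Sample a regular n_per_side x n_per_side grid of pixel timeseries.

    For a (510, 510, T, C) tile, 50 x 50 = 2,500 pixel samples of
    shape (T, C) each.
    """
    h, w = tile.shape[:2]
    rows = np.linspace(0, h - 1, n_per_side).round().astype(int)
    cols = np.linspace(0, w - 1, n_per_side).round().astype(int)
    # Index the Cartesian product of the grid rows and columns.
    pixels = tile[np.ix_(rows, cols)]            # (50, 50, T, C)
    return pixels.reshape(-1, *tile.shape[2:])   # (2500, T, C)

# e.g. grid_sample_pixels(np.zeros((510, 510, 24, 2))).shape -> (2500, 24, 2)
```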
A.1.2 Input data
Table 9: Model sizes and FLOPs required to encode a single EuroSAT image (or pixel, for Presto), as measured by the thop library. When plotting results in Table 5, we multiply the FLOPs for Presto by the number of pixels encoded for an image. At its highest resolution, an EuroSAT image is 64×64 pixels, so Presto FLOPs for a full-resolution image can be obtained by multiplying the per-pixel FLOPs by 4,096. We include this value in brackets for completeness.

Model                              Backbone       Params (M)  MegaFLOPs
SatMAE (RGB) (Cong et al., 2022)   ViT-Large      303.10      59,685.69
SatMAE (MS) (Cong et al., 2022)    ViT-Large      305.96      535,515.25
ScaleMAE (Reed et al., 2022)       ViT-Large      303.10      59,685.69
ConvMAE (Gao et al., 2022)         ConvMAE-Large  88.78       23,315.58
SeCo (Manas et al., 2021)          ResNet-18      11.69       149.37
GASSL (Ayush et al., 2021)         ResNet-18      11.69       149.37
Presto RGB pixel (image)           Presto         0.40        0.79 (3,235.84)
Presto MS pixel (image)            Presto         0.40        2.37 (9,707.52)
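The per-pixel-to-per-image scaling in the caption is a straight multiplication: 0.79 × 4,096 = 3,235.84 and 2.37 × 4,096 = 9,707.52 MegaFLOPs, matching the bracketed values. Below is a minimal sketch of producing such numbers with thop, using a placeholder encoder rather than the real Presto model (thop counts multiply-accumulate operations, which are commonly reported as FLOPs):

```python
import torch
from thop import profile  # pip install thop

# Placeholder per-pixel encoder standing in for Presto.
encoder = torch.nn.Sequential(
    torch.nn.Linear(10, 128), torch.nn.ReLU(), torch.nn.Linear(128, 128)
)
pixel = torch.randn(1, 10)  # one multispectral pixel (placeholder shape)

ops, params = profile(encoder, inputs=(pixel,))

# Per-pixel cost scales linearly with the number of pixels encoded,
# so a full-resolution 64x64 EuroSAT image costs 4,096x as much.
print(f"per-pixel MegaFLOPs: {ops / 1e6:.2f}")
print(f"per-image MegaFLOPs: {ops * 64 * 64 / 1e6:.2f}")
```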
We leverage the following data products when pre-training Presto (a sketch of the derived features follows the list):

• Sentinel-1 Synthetic Aperture Radar observations (S1): The VV (emit and receive at vertical polarization) and VH (emit at vertical and receive at horizontal polarization) bands: 2 real-valued dynamic values per monthly timestep.

• Sentinel-2 Multispectral images (S2): We removed the 60m-resolution bands, yielding bands with 10m and 20m resolution with channels in the visible, near-infrared, and short-wave infrared range: 10 real-valued dynamic values per timestep.

• ERA5 Climate Reanalysis Meteorological data (ERA5): Monthly total precipitation and temperature at 2 metres above the ground: 2 real-valued dynamic values per timestep.

• NDVI (Rouse et al., 1974): Computed from the red (B4) and near-infrared (B8) Sentinel-2 bands: 1 real-valued dynamic value per timestep.

• Dynamic World Land Cover classes (DW, Brown et al., 2022): Land cover classes produced for every non-cloudy Sentinel-2 image: 1 dynamic categorical value per timestep, drawn from the set of possible classes V. We took the mode of the classes for all timesteps within a month.

• Topography data (TG), from the Shuttle Radar Topography Mission's Digital Elevation Model: The elevation and slope of each pixel, real-valued and static in time.

• Coordinates (Loc): 3D Cartesian coordinates, static in time, computed from the latitude and longitude of the pixel's geographical location: s_Loc = [cos(lat) × cos(lon), cos(lat) × sin(lon), sin(lat)].
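A minimal sketch of the two derived features above, assuming NumPy. The function names, the degree-to-radian conversion, and the epsilon guard against division by zero are our assumptions; the NDVI formula (B8 − B4) / (B8 + B4) and the s_Loc formula are as stated above.

```python
import numpy as np

def ndvi(b4_red: np.ndarray, b8_nir: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """NDVI from the Sentinel-2 red (B4) and near-infrared (B8) bands."""
    return (b8_nir - b4_red) / (b8_nir + b4_red + eps)

def cartesian_loc(lat_deg: float, lon_deg: float) -> np.ndarray:
    """s_Loc = [cos(lat)cos(lon), cos(lat)sin(lon), sin(lat)] on the unit sphere."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array(
        [np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)]
    )
```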
A.1.3 Channel Groups

As described in Section 3.2, we transform the pixel timeseries x into a number of tokens, where each token is a linear transformation of a subset of the input channels. We group together channels which