Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning
Colorado J Reed1,2*, Ritwik Gupta1*, Shufan Li1*, Sarah Brockman3, Christopher Funk3, Brian Clipp3, Kurt Keutzer1, Salvatore Candido2, Matt Uyttendaele2, Trevor Darrell1
1Berkeley AI Research; 2Meta AI, FAIR; 3Kitware Inc.
correspondence to [email protected]
Abstract |
Large, pretrained models are commonly finetuned with imagery that is heavily augmented to mimic different conditions and scales, and the resulting models are used for various tasks with imagery from a range of spatial scales. Such models overlook scale-specific information in the data for scale-dependent domains, such as remote sensing. In this paper, we present Scale-MAE, a pretraining method that explicitly learns relationships between data at different, known scales throughout the pretraining process. Scale-MAE pretrains a network by masking an input image at a known input scale, where the area of the Earth covered by the image determines the scale of the ViT positional encoding, not the image resolution. Scale-MAE encodes the masked image with a standard ViT backbone, and then decodes the masked image through a bandpass filter to reconstruct low/high frequency images at lower/higher scales. We find that tasking the network with reconstructing both low and high frequency images leads to robust multiscale representations for remote sensing imagery. Scale-MAE achieves an average nonparametric kNN classification improvement of 2.4–5.6% across eight remote sensing datasets compared to the current state of the art, and obtains a 0.9 to 1.7 mIoU improvement on the SpaceNet building segmentation transfer task for a range of evaluation scales.
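The abstract's central idea is a positional encoding whose scale is set by the ground area an image covers rather than its pixel resolution. The sketch below is an illustrative, assumed implementation of that idea, not the paper's code: the function name `gsd_positional_encoding`, its 1-D layout, and the `reference_gsd` normalization constant are all assumptions (Scale-MAE operates on 2-D patch grids).

```python
import numpy as np

def gsd_positional_encoding(num_patches, dim, gsd, reference_gsd=1.0):
    """Sinusoidal positional encoding whose positions are scaled by GSD.

    Patch positions are multiplied by gsd / reference_gsd, so an image
    resampled to a coarser GSD (fewer pixels over the same ground area)
    receives encodings consistent with the finer-GSD version of the
    same scene. `reference_gsd` is an assumed normalization constant.
    """
    positions = np.arange(num_patches) * (gsd / reference_gsd)
    freqs = 1.0 / (10000.0 ** (np.arange(0, dim, 2) / dim))
    angles = positions[:, None] * freqs[None, :]
    enc = np.zeros((num_patches, dim))
    enc[:, 0::2] = np.sin(angles)  # even channels: sine
    enc[:, 1::2] = np.cos(angles)  # odd channels: cosine
    return enc
```

Under this scheme, patch 1 of a 0.6m GSD image and patch 2 of a 0.3m GSD image sit at the same ground offset and receive identical encodings, which is the scale-consistency property the abstract describes.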
1. Introduction |
Remote sensing data is captured from satellites and planes through a mixture of sensors, processing pipelines, and viewing geometries. Depending on the composition and relative geometry of the sensor to the Earth, each image's Ground Sample Distance (GSD, the physical distance between two
*Denotes co-first authorship. Co-first authors will prioritize their names on their resumes/websites.
Figure 1. Scale-MAE learns better representations for multiscale tasks compared to vanilla MAE. (Column 1) The top image spans an area at 0.3m GSD and the bottom image shows the same region at a coarser GSD. (Columns 2–4) The following columns show a ground truth building segmentation, a Scale-MAE segmentation from a finetuned UperNet, and a segmentation from an analogously finetuned UperNet from a vanilla MAE, respectively. Scale-MAE demonstrates better performance across images at both scales. See the supplementary material for more examples.
adjacent pixels in an image) can vary from 0.3m to 1km, so a 100×100 pixel image could span anywhere from an Olympic-size swimming pool (900 m²) to almost the entire country of Jamaica (10,000 km²). The data within each image, and the corresponding objects and points of interest, can therefore vary across wide spatial ranges. Data from these multiscale sensors provide critical and complementary information for various operational and research applications in areas such as atmospheric, hydrologic, agricultural, and environmental monitoring [45, 52].
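The back-of-the-envelope area calculation above can be reproduced with a short helper; `footprint_m2` is an illustrative function assumed here, not from the paper:

```python
def footprint_m2(width_px, height_px, gsd_m):
    """Ground area covered by an image, in square meters.

    Each pixel covers gsd_m x gsd_m meters of ground, so the footprint
    is the pixel dimensions scaled by GSD, multiplied together.
    """
    return (width_px * gsd_m) * (height_px * gsd_m)

# A 100x100 px image at 0.3 m GSD covers 30 m x 30 m = 900 m^2
# (an Olympic pool); at 1 km GSD it covers 100 km x 100 km
# = 10,000 km^2, nearly the area of Jamaica.
```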
Few modern computer vision methods have explicitly addressed multiscale remote sensing imagery [35]. Nevertheless, the remote sensing vision community has increasingly used large, pretrained models [13, 20], where such applications finetune a pretrained model for a single source of
arXiv:2212.14532v4 [cs.CV] 22 Sep 2023