(a) Image  (b) Mask  (c) Prediction
Figure 5: Landsat 7 ETM+ TOA image, ground truth mask, and prediction made by a U-Net with a ResNet-18 backbone pre-trained using MoCo and SSL4EO-L and fine-tuned on L7 Irish.
Figure 5 shows an example prediction made by a U-Net pre-trained on SSL4EO-L and fine-tuned on L7 Irish. The model correctly detects the majority of clouds in the image, but fails to detect cloud shadow due to its infrequent appearance in the training dataset. However, the model outperforms the human annotator in the lower left corner, where the “ground truth” mask misses substantial cloud and thin cloud.
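As a concrete illustration (not code from this paper), the following sketch shows one way to assemble such a model from the weights we distribute via TorchGeo, using the segmentation_models_pytorch U-Net implementation; the weight enum name, band count, and class count are assumptions based on the released library and the L7 Irish labeling scheme, and may differ across versions.

```python
# Hedged sketch: U-Net with a ResNet-18 encoder initialized from the
# SSL4EO-L MoCo weights distributed via TorchGeo. The weight enum name
# is an assumption based on TorchGeo's documented naming scheme.
import segmentation_models_pytorch as smp
from torchgeo.models import ResNet18_Weights, resnet18

weights = ResNet18_Weights.LANDSAT_ETM_TOA_MOCO  # MoCo, ETM+ TOA
backbone = resnet18(weights=weights)

model = smp.Unet(
    encoder_name="resnet18",
    encoder_weights=None,  # SSL4EO-L weights are loaded manually below
    in_channels=9,         # ETM+ TOA bands in SSL4EO-L
    classes=5,             # L7 Irish: fill, shadow, clear, thin cloud, cloud
)
# Copy the pre-trained encoder parameters; strict=False skips the
# classification head, which has no counterpart in the U-Net encoder.
model.encoder.load_state_dict(backbone.state_dict(), strict=False)
```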
5 Limitations
There are a few limitations of the sampling method we chose to create our pre-training dataset. Due to low light levels near the poles, Landsat satellites do not capture images above 81.8° latitude [77] and do not produce SR products above 76° latitude.⁴ Because of the 23.5° tilt of the Earth's axis [78], winter imagery is unavailable at high latitudes, so it is not possible to collect imagery for all 4 seasons above 76° − 23.5° = 52.5° latitude. It may be possible to relax this constraint and allow sampling from locations where 3 out of 4 seasons have imagery. Due to cloud cover and low population density, there is very little imagery of tropical rainforests or polar regions, both of which are common application areas for Landsat data.
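To make the geometry concrete, the following minimal sketch encodes these latitude limits; the constants come from the text above, while the helper function and its name are purely illustrative and not part of our actual sampling code.

```python
# Minimal sketch of the latitude constraints described above. Constants are
# taken from the text; the helper function is illustrative only.
MAX_ACQUISITION_LAT = 81.8  # Landsat does not image above this latitude [77]
MAX_SR_LAT = 76.0           # no SR products above this latitude (footnote 4)
AXIAL_TILT = 23.5           # tilt of the Earth's axis [78]

def four_seasons_available(lat: float, product: str = "sr") -> bool:
    """Return True if imagery for all 4 seasons exists at latitude `lat`."""
    limit = MAX_SR_LAT if product == "sr" else MAX_ACQUISITION_LAT
    # In winter the effective imaging boundary shifts equatorward by the
    # axial tilt, e.g., 76.0 - 23.5 = 52.5 degrees for SR products.
    return abs(lat) <= limit - AXIAL_TILT

assert four_seasons_available(45.0)      # mid-latitudes: all 4 seasons
assert not four_seasons_available(60.0)  # above 52.5°: no winter SR imagery
```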
The benchmark datasets we create are limited to the United States and may not adequately reflect performance in other regions where agricultural practices and crops differ greatly. Ideally, we would create additional global datasets. Large global Landsat-based datasets do exist, including the Global Forest Cover Change dataset [40], but they do not span all time periods during which these satellites were active. We would also like to have classification datasets in addition to semantic segmentation datasets. It may be possible to classify images by biome, although this task may be too easy. In future work, we would like to add pre-trained models for MSS data, although this will require a different sampling technique due to MSS's limited coverage over most of the world.
6 Conclusion
In this paper we introduce the SSL4EO-L pre-training dataset, the first SSL dataset for Landsat imagery and the largest Landsat dataset in history. We pre-train the first foundation models for the Landsat family of satellites, enabling progress in a multitude of scientific fields that can benefit from remote sensing and deep learning. Additionally, we revitalize the L7 Irish and L8 Biome datasets and create the first benchmark datasets for the TM and ETM+ SR sensors, allowing direct comparison across all modern Landsat sensors and products. All datasets, model weights, training code, and scripts used to produce our results are distributed via the TorchGeo library, allowing for easy experimentation and reproduction of our results.
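As an example of this distribution pathway, the sketch below shows how the benchmark datasets might be loaded through TorchGeo; the class names and keyword arguments follow the library's documentation at the time of writing and should be treated as assumptions rather than a guaranteed API.

```python
# Hedged sketch: loading the benchmark datasets through TorchGeo. Class and
# argument names may change across releases; treat them as assumptions.
from torchgeo.datasets import L7Irish, L8Biome, SSL4EOLBenchmark

# Cloud-detection benchmarks revitalized in this work.
irish = L7Irish("data/l7irish", download=True)
biome = L8Biome("data/l8biome", download=True)

# CDL/NLCD segmentation benchmarks paired with the SSL4EO-L sensors.
cdl = SSL4EOLBenchmark(
    root="data/ssl4eo_benchmark",
    sensor="etm_sr",  # assumed option; TOA/SR variants of TM/ETM+/OLI
    product="cdl",
    split="train",
    download=True,
)
sample = cdl[0]
print(sample["image"].shape, sample["mask"].shape)
```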
⁴https://www.usgs.gov/landsat-missions/landsat-collection-2-surface-reflectance
Acknowledgments and Disclosure of Funding
The authors gratefully acknowledge the computational and data resources provided through the joint high-performance data analytics (HPDA) project “terrabyte” of the German Aerospace Center (DLR) and the Leibniz Supercomputing Center (LRZ). This work was supported by the Helmholtz Association's Initiative and Networking Fund on the HAICORE@FZJ partition. This work made use of the Illinois Campus Cluster, a computing resource that is operated by the Illinois Campus Cluster Program (ICCP) in conjunction with the National Center for Supercomputing Applications (NCSA) and which is supported by funds from the University of Illinois at Urbana-Champaign. The work was supported in part by the National Science Foundation (NSF) through awards IIS 21-31335, OAC 21-30835, DBI 20-21898, as well as a C3.ai research award and the Taiwan-UIUC Fellowship.
References
[1] Laura E. P. Rocchio. Virginia T. Norwood: The mother of Landsat. Landsat Science, August 2020.
[2] Bill P. Clark. Landsat 3 Return Beam Vidicon response artifacts: A report on RBV photographic product characteristics and quality coding system. Technical report, EROS Data Center, U.S. Geological Survey, August 1981.
[3] Christopher Engebretson. Landsat Multispectral Scanner (MSS) Collection 2 (C2) Level 1 (L1) Data Format Control Book (DFCB). Technical report, Department of the Interior, U.S. Geological Survey, September 2020. LSDS-1416.
[4] Christopher Engebretson. Landsat Thematic Mapper (TM) Level 1 (L1) Data Format Control Book (DFCB). Technical report, Department of the Interior, U.S. Geological Survey, February 2018. LSDS-284.
[5] Jim Lacasse. Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) Level 1 (L1) Data Format Control Book (DFCB). Technical report, Department of the Interior, U.S. Geological Survey, August 2016. LSDS-272.
[6] Christopher Engebretson. Landsat 8–9 Operational Land Imager (OLI) - Thermal Infrared Sensor (TIRS) Collection 2 Level 1 (L1) Data Format Control Book (DFCB). Technical report, Department of the Interior, U.S. Geological Survey, September 2020. LSDS-1822.
[7] Nicholas E. Young, Ryan S. Anderson, Stephen M. Chignell, Anthony G. Vorster, Rick Lawrence, and Paul H. Evangelista. A survival guide to Landsat preprocessing. Ecology, 98(4):920–932, 2017.
[8] Neal Jean, Sherrie Wang, Anshul Samar, George Azzari, David Lobell, and Stefano Ermon. Tile2Vec: Unsupervised representation learning for spatially distributed data. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3967–3974, 2019.
[9] Kumar Ayush, Burak Uzkent, Chenlin Meng, Kumar Tanmay, Marshall Burke, David Lobell, and Stefano Ermon. Geography-aware self-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10181–10190, 2021.
[10] Yezhen Cong, Samar Khanna, Chenlin Meng, Patrick Liu, Erik Rozi, Yutong He, Marshall Burke, David Lobell, and Stefano Ermon. SatMAE: Pre-training transformers for temporal and multi-spectral satellite imagery. Advances in Neural Information Processing Systems, 35:197–211, 2022.
[11] Colorado J. Reed, Ritwik Gupta, Shufan Li, Sarah Brockman, Christopher Funk, Brian Clipp, Salvatore Candido, Matt Uyttendaele, and Trevor Darrell. Scale-MAE: A scale-aware masked autoencoder for multiscale geospatial representation learning. arXiv preprint arXiv:2212.14532, 2022.
[12] Oscar Manas, Alexandre Lacoste, Xavier Giró-i-Nieto, David Vazquez, and Pau Rodriguez. Seasonal Contrast: Unsupervised pre-training from uncurated remote sensing data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9414–9423, 2021.
[13] Yi Wang, Nassim Ait Ali Braham, Zhitong Xiong, Chenying Liu, Conrad M. Albrecht, and Xiao Xiang Zhu. SSL4EO-S12: A large-scale multi-modal, multi-temporal dataset for self-supervised learning in Earth observation. arXiv preprint arXiv:2211.07044, 2022.
[14] Di Wang, Jing Zhang, Bo Du, Gui-Song Xia, and Dacheng Tao. An empirical study of remote sensing pretraining. IEEE Transactions on Geoscience and Remote Sensing, 2022.
[15] Yi Wang, Conrad M. Albrecht, Nassim Ait Ali Braham, Lichao Mou, and Xiao Xiang Zhu. Self-supervised learning in remote sensing: A review. arXiv preprint arXiv:2206.13188, 2022.
[16] Paul Berg, Minh-Tan Pham, and Nicolas Courty. Self-supervised learning for scene classification in remote sensing: Current state of the art and perspectives. Remote Sensing, 14(16):3995, 2022.
[17] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with Momentum Contrastive learning. arXiv preprint arXiv:2003.04297, 2020.
[18] Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Advances in Neural Information Processing Systems, 33:9912–9924, 2020.